Thanks for taking the time to read my thoughts about Visual Business Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions that are either too urgent to wait for a full-blown article or too limited in length, scope, or development to require the larger venue. For a selection of articles, white papers, and books, please visit my library.

 

What Qualifies as Engagement?

December 23rd, 2015

When vague or ambiguous terms are used without being defined, confusion results. The term Big Data is a prime example. An entire industry has been built up around a term that no two people define in the same way. In this particular case, the confusion is useful to vendors and consultants who want to sell you so-called Big Data products and services. In the field of data visualization, the term engagement is being tossed about more and more, without a clear definition. Research papers are being written that make claims about engagement without declaring what their authors mean by the term. This creates a minefield of confusion.

This issue came up in a recent conversation in my discussion forum between Enrico Bertini and me. Enrico was using the term engagement to describe attributes of data visualizations that are eye-catching. I responded to Enrico that engagement, in my opinion, involves something more than merely catching the eye. When people use the term engagement in discussions of data visualization, they almost always use it in a positive manner. It is assumed that engagement is useful. This, however, is not a good assumption. We can certainly become engaged in activities that are less than useful, even harmful.

Measuring the degree to which someone becomes engaged in viewing or in some way using a visualization can be useful, but only if we clearly define what we mean by the term and choose a metric that actually measures it, which is rarely done. Using eye-tracking technology to measure the amount of time someone spends looking at something in particular is a common way that engagement is measured in research studies, and in these studies it is nearly always assumed, but rarely explained, why engagement defined and measured in this manner is beneficial. Someone might look at a particular visualization, or a particular component of one, because it’s visually unusual or eye-catching, but not in a manner that is informative. It is also possible that someone might look at something for a long time because they are struggling to make sense of it, which is likely a problem. The struggle would only be useful if it resulted in understanding something that couldn’t have been understood more easily had the data been displayed in another way.

Let me illustrate with a specific example. Recently, Steve Haroz, Robert Kosara, and Steven L. Franconeri wrote a research paper titled “The Connected Scatterplot for Presenting Paired Time Series.” The purpose of the study was to test the potential usefulness of an unusual version of a scatterplot that a few graphical journalists have produced in recent years. When we want to examine how two quantitative variables that don’t share a common quantitative scale (e.g., monthly sales revenues in dollars versus the number of items sold) changed through time, we would ideally use two line graphs, one positioned above the other, or, if our audience is not confused by dual-axis graphs, a single line graph with a scale on the left for one variable and a scale on the right for the other. The same values can be displayed in a scatterplot, however, with a single data point for each time period, encoding one variable as the position along the X axis and the other as the position along the Y axis. To show the chronological sequence, the dots are connected with a line and labeled sequentially. Here’s a simple example from the paper:

Connected Scatterplot

And here’s the same data shown as a dual-axis line chart:

Dual-Axis Line Chart

In this particular case, a dual-axis graph wouldn’t actually be needed because both variables share the same quantitative scale, but you can imagine that they don’t.
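
If you’d like to experiment with the two forms yourself, here’s a minimal Python/matplotlib sketch. The monthly figures are made up for illustration (they are not the data from the paper); it simply draws the same hypothetical paired time series both ways:

```python
import matplotlib.pyplot as plt

# Hypothetical paired time series (illustrative values, not the paper's data)
months  = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
revenue = [120, 135, 128, 150, 162, 158]   # e.g., sales revenue in $1,000s
units   = [310, 340, 325, 380, 400, 390]   # e.g., number of items sold

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Connected scatterplot: one dot per month, x = one variable, y = the other,
# connected in chronological order and labeled sequentially.
ax1.plot(revenue, units, "-o", color="steelblue")
for m, x, y in zip(months, revenue, units):
    ax1.annotate(m, (x, y), textcoords="offset points", xytext=(5, 5))
ax1.set_xlabel("Revenue ($1,000s)")
ax1.set_ylabel("Units sold")
ax1.set_title("Connected scatterplot")

# Dual-axis line chart: time on the x axis, one y scale per variable.
ax2.plot(months, revenue, "-o", color="steelblue")
ax2.set_ylabel("Revenue ($1,000s)", color="steelblue")
ax2_right = ax2.twinx()
ax2_right.plot(months, units, "-s", color="darkorange")
ax2_right.set_ylabel("Units sold", color="darkorange")
ax2.set_title("Dual-axis line chart")

plt.tight_layout()
plt.show()
```

Even with this tiny series, the dual-axis version reads left to right through time, whereas the connected scatterplot requires following the line and reading the sequence labels to reconstruct the chronology.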

Although it seems obvious that, even with an extremely simple data set such as this, the connected scatterplot is more difficult to read than the dual-axis line chart, it certainly didn’t hurt to do an experiment to confirm this. In fact, these researchers confirmed that connected scatterplots are difficult to read and produce more errors in understanding. What’s interesting, however, is that the researchers made the following statement in the last sentence of the paper’s conclusions section:

All these findings suggest that the technique, despite its lack of familiarity, has merit for presenting and communicating data.

If you read the paper carefully, however, you’ll find that the results did not indicate that these graphs provide any actual benefits. So what, then, are these researchers referring to as merit? Here’s the answer:

The prioritized viewing of CSs [connected scatterplots] – at least as compared to DALCs [dual-axis line charts] – makes them good candidates when the goal is to draw a viewer’s attention.

Later in the same paragraph, however, they make the following admission:

But it is not yet clear whether the preferential viewing arises from the technique per se, or its lack of familiarity.

What do they actually mean by “prioritized viewing” and “preferential viewing,” terms that suggest usefulness? Test subjects were shown screens consisting of six blurred thumbnail versions of charts—three connected scatterplots and three dual-axis line charts. The researchers told subjects that they were “studying the types of information that most interested them.” They were further told that they had five minutes and that, during that time, they could click on any thumbnail chart to view a larger, nonblurred version of it. Using eye-tracking technology to monitor the subjects, the authors found that, during the first half of the five-minute period, subjects spent more time on average looking at the connected scatterplots. This is what led the authors to recommend connected scatterplots’ “use for engagement and communication.” I don’t consider this a meaningful assessment of engagement, and certainly not of engagement that is useful, given the fact that subjects found these graphs difficult to understand.

We should be clearer and more precise in our use of terms when they constitute the object of research or are promoted as beneficial. The degree to which, and the manner in which, someone becomes engaged in viewing and interacting with a visualization is worth considering, but only if we define engagement clearly. To me, engagement suggests more than attracting attention; it suggests sustained attention. For engagement to qualify as useful, it must involve productive thinking. For example, here’s a definition that might work in the context of data visualization:

Useful engagement with visualized data involves a sustained period of attention on the data or in interaction with the data that increases understanding.

If we measure this and seek to achieve this in our work, we’ll be doing something worthwhile.

Take care,


Journalistic Graphics with Integrity

December 16th, 2015

My friend and colleague Katherine Rowell of HealthDataViz sent me a link this morning to an article in the New York Times titled “The Experts Were Wrong About the Best Places for Better and Cheaper Health Care,” by Kevin Quealy and Margot Sanger-Katz (December 15, 2015). It caught her attention in part because it involves healthcare, which is the focus of her work, but also because she thought the graphs that appear in the article were well done. After skimming the article, I agreed with her assessment of the graphs. I wasn’t surprised, for the New York Times produces some of the best journalistic graphics in the world. They don’t always get it right, in my opinion, but they usually do.

I thought I’d bring this article to your attention to illustrate how graphics can be used to complement a news story without the eye-candy that is often introduced by unskilled infographic designers. I also thought I’d invite you to use this as an opportunity to hone your own skills in critiquing the effectiveness of graphs. Based on my cursory review, only one potential problem caught my attention. Perhaps you’ll find it as well. If you look closely, you might come up with additional ways in which the graphs could have been improved to tell their stories more effectively. I won’t identify the problem that I noticed until later, in a couple of days or so. In the meantime, review the article on your own and then post comments here to show your appreciation for particular aspects of the graphs or to suggest ways in which they could have been improved.

Take care,


It’s Time to Come Out from the Shadows

December 9th, 2015

In the most recent edition of the Visual Business Intelligence Newsletter, I critiqued a research study by Michelle Borkin and several others titled “Beyond Memorability: Visualization Recognition and Recall.” The purpose of my article was to expose systemic problems that exist in the field of information visualization research and to encourage efforts to address them. I took great pains to point out that Michelle and her work are but one example of a widespread problem in our field. I have nothing against Michelle personally. As I explained in the article, I selected her paper as the object of my critique because it has received a great deal of media attention and it illustrates many of the problems that are rife within the infovis research community.

Speaking up about serious problems in one’s field is not a popular thing to do. I am the bearer of news that is uncomfortable for everyone in the community. I know that this is especially true for Michelle, which gives me no pleasure. Unfortunately, there is no way that these problems can be addressed solely in the abstract. Real examples of ill-conceived and dysfunctional research must be identified and dissected to make these problems tangible. Everything that I wrote was true and, in my opinion, needed to be said. Nothing was said lightly.

I received an email from a respected friend and colleague in the field today who expressed his concern that I may have inadvertently crossed a line when I wrote the following [emphasis his]:

Borkin didn’t produce a flawed study because she lacks talent. As a doctoral student she did a study titled “Evaluations of Artery Visualizations for Heart Disease Diagnosis” that was exceptionally worthwhile and well done. In that study, she showcased her strengths. I suspect that her studies of memorability were dysfunctional because she lacked the experience and training required to do this type of research. She is now an Assistant Professor at Northeastern University, teaching the next generation of students. I’m concerned that she will teach them to produce pseudo-science. This is a depressing cycle. Too many academics are supervising research studies that fall outside of their areas of expertise. Isn’t it time to break this cycle?

Here is an excerpt from my response to my friend:

I am quite sincere in my concern that we are training a new generation of poor infovis researchers…The fine work that people like you…and a few others are doing in the field is not turning the tide. Most people working in the field today are ill-equipped and that won’t change without addressing the systemic problem. I would like to see infovis research get on track in my lifetime. I’m doing what I can to turn the tide. I know that my efforts aren’t popular among many in the community. I’m convinced, however, that until others in the community begin to voice their concerns, the onus falls on me to do what I can in the only way I know how. For years I’ve been inviting people like you who are respected in the community and share my concerns to speak up. You have such a voice. Please raise it publicly to address these problems in your own way. 

You see the dilemma that I face. If you have advice to offer, I’ll welcome it. I invite this sincerely. I derive no pleasure from being the voice of a strident reformer. I couldn’t live with myself, however, if I stood by and did nothing. This is my work. It’s what I have to offer the world. I’m trying to do it well.

What I wrote in the newsletter article is but one of many examples of similar critiques that I’ve written over the years. If you’re familiar with my work, you know that I do more than criticize—I provide thoughtful analysis and suggest solutions. I have always invited the community to respond, but have rarely received a public response. What I have often heard is that my words sparked private discussions—usually angry. That isn’t helping.

Something needs to be done and it must be done in the daylight, not in the shadows. The focus must be on the problems that I’ve exposed and how we can address them. This is not about me. This is not about Michelle. This is about an important field of study that we all care about deeply. It is about future generations of infovis researchers who could be solving real problems in the world. We have a responsibility to fix the systemic problems in our field. I have worked tirelessly to address these problems and have always tried to do so with integrity. I have spoken publicly, setting myself up as the target of anger from people who should be doing something to address the problems. I’m trying to be a part of the solution. What are you doing? If you have anything useful to say, it’s time to say it. If my assessment of our situation is wrong, let me know. If my assessment is correct and you have solutions, let me know. Please, let me know and in so doing, help me open a door to greater contributions from our field.

Take care,


Two Highlights of IEEE’s 2015 VisWeek Conference

November 30th, 2015

Each year IEEE’s VisWeek Conference features a few information visualization research papers that catch my interest. This year, two papers in particular stood out as useful and well done:

“Voyager: Exploratory Analysis via Faceted Browsing of Visualization Recommendations,” by Kanit Wongsuphasawat, Dominik Moritz, Anushka Anand, Jock Mackinlay, Bill Howe, and Jeffrey Heer

“Automatic Selection of Partitioning Variables for Small Multiple Displays,” by Anushka Anand and Justin Talbot

My recent book Signal: Understanding What Matters in a World of Noise teaches a comprehensive approach to visual exploratory data analysis (EDA). Its purpose is, in part, to encourage data sensemakers to examine their data both broadly and deeply before beginning to look for signals. The paper “Voyager: Exploratory Analysis via Faceted Browsing of Visualization Recommendations” addresses the observation that most data sensemakers, especially those who have not been trained in statistics, tend to explore data in narrow ways. As a result, they tend to miss many potential discoveries and to understand their data superficially, at best. Voyager is a visual EDA tool that the authors developed to test features that encourage and support broader data exploration. The tool includes a recommendation engine that does a good job of determining useful views of the data from many perspectives (i.e., faceted views), which it then automatically suggests to the user in an organized manner. Unlike the misguided attempts at this that I’ve seen in a few commercial products, Voyager’s auto-generated views are 1) well chosen, rather than just a gallery of possible views, 2) organized in a manner that makes them easy and efficient to navigate, and 3) suggestive without being restrictive. Regarding the third point, the interface allows users to depart from the suggested views to pursue specific questions as they arise, and does so in a manner that isn’t confining. It also allows users to return to the broader assortment of views with ease. This beautifully supports data sensemakers’ perpetual need to shift between high-level (summary) and low-level (detail) views.

I’m particularly sensitive to the fact that good EDA tools support the user’s thinking process, making suggestions and doing much of the grunt work in a way that never takes control from the user and never subverts the thinking process. Good tools recognize that humans must do the thinking; they augment the analytical process without becoming overbearing. The authors of this research seem to share my concern, no doubt because they actually understand the analytical process, based on a great deal of thought and experience.

The second paper, “Automatic Selection of Partitioning Variables for Small Multiple Displays,” is akin to the first in that it identified a real need, addressed it thoughtfully, and supports data sensemakers with useful suggestions.

Small multiples provide a powerful means of displaying data that could not be shown effectively in a single graph. They enable rich comparisons in a way that suits the human brain. The standard approach to splitting data into small multiples is to base each view on a single item of a categorical variable. For example, a series of small multiples showing sales data might split the data into a separate view for each product or for each region. What the authors of this paper recognize, however, is that in some data sets there are meaningful subsets of values that could only be seen if each were displayed in a separate view. Consequently, it would be helpful if an analytical tool did a little searching of its own for potentially meaningful patterns and clusters based on all related variables and then suggested small multiples to feature them. Unlike the typical use of small multiples, the division of data into separate views might be based on specific ranges of a quantitative variable rather than items of a categorical variable. This research project attempts to find, algorithmically, meaningful clusters and patterns that might otherwise remain hidden from view, and to suggest views that reveal them.

Let’s consider an example that appears in the paper. Here’s a scatter plot that shows the relationship between admission rates and graduation rates at various U.S. universities:

Initial Scatter Plot

What’s not obvious when viewing this scatter plot is that there are three groups of universities that exhibit different graduation rate patterns. Although we can’t detect them with our eyes, a good algorithm could detect them for us. These particular groups are related to the ACT scores of students who are admitted to those universities. The different patterns are revealed when the universities are split based on the following ACT score bins: 10-20, 20-30, and 30-36. Here are the results as a series of small multiples based on these bins:

Small Multiple Display

Now we can see that, when segregated into these bins, meaningful patterns emerge. Universities that admit students with the highest ACT scores tend to have high graduation rates, regardless of admission rates, and those with the lowest ACT scores tend to have low graduation rates, regardless of admission rates. Universities that admit students in the 20-30 ACT score range tend to avoid the extremes at both ends of the graduation rate continuum. It isn’t likely that we would have ever found this relationship without the help of the algorithm, because we would probably have never binned ACT scores in this particular way. The authors are proposing a way to enable an analytical tool to find algorithmically what we might not discover on our own during an exploratory journey. It then presents the results to us so we can use our own eyes and brains to confirm the findings and investigate them further. This is a worthwhile use of technology and a well-designed solution.
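
If you want to try this kind of partitioning by hand, here’s a small pandas/matplotlib sketch. The file name and column names are my own assumptions for illustration, and the ACT bins are hard-coded here; in the authors’ approach, the partitioning variable and its ranges would be chosen by their algorithm rather than by the analyst:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical table of universities; columns are assumed for illustration.
df = pd.read_csv("universities.csv")  # admission_rate, graduation_rate, act_score

# Hard-coded bins (the authors' algorithm would discover a split like this automatically)
bins   = [10, 20, 30, 36]
labels = ["ACT 10-20", "ACT 20-30", "ACT 30-36"]
df["act_bin"] = pd.cut(df["act_score"], bins=bins, labels=labels, include_lowest=True)

# One small multiple per bin, sharing scales so the panels are comparable
fig, axes = plt.subplots(1, len(labels), figsize=(12, 4), sharex=True, sharey=True)
for ax, label in zip(axes, labels):
    subset = df[df["act_bin"] == label]
    ax.scatter(subset["admission_rate"], subset["graduation_rate"], s=15, alpha=0.6)
    ax.set_title(label)
    ax.set_xlabel("Admission rate")
axes[0].set_ylabel("Graduation rate")

plt.tight_layout()
plt.show()
```

The hard part, of course, is knowing in advance that ACT score, binned in this particular way, is the variable worth splitting on; that is precisely what the goodness-of-split algorithm described below automates.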

The authors call the smarts behind this work a “goodness-of-split algorithm” (a clever pun on “goodness of fit”). It ranks potential ways of splitting a scatter plot into small multiples based on four criteria:

  • Visually rich
    How well the split presents visually rich patterns that can be perceived by the human visual system.
  • Informative
    How well the split reveals information that cannot be seen in the existing scatter plot.
  • Well-supported
    How well the split reveals patterns that are meaningful, as opposed to spurious patterns based on random variation.
  • Parsimonious
    A split that results in the fewest number of small multiples without sacrificing anything useful.

I won’t try to explain how the algorithm works, but you can rest assured that it is based on a deep understanding of statistics and visual perception. If you love elegant algorithms, read the paper for an explanation that will appeal to the computer scientist in you.

Other worthwhile papers were presented this year at VisWeek, but these are the two that piqued my interest most. Unsurprisingly, a few of this year’s papers caught my attention for the wrong reasons. My final newsletter article of this year, which will be published tomorrow, will critique one of those papers as a cautionary tale about research gone awry.

Take care,


What Is the Best Response to Bad Practices?

October 28th, 2015

During the last two days, I spent a great deal of time corresponding with my friend Alberto Cairo after he informed me that he was hosting a public lecture by David McCandless at the University of Miami. Alberto and I are both critical of McCandless’ infographics. I am more passionate in my criticism, however, perhaps because I frequently and directly encounter the ill effects of McCandless’ influence. More than anyone else working in data visualization today, McCandless has influenced people to design data visualizations in ways that are eye-catching but difficult to read and often inaccurate. Also more than anyone else, when my readers and students talk about the challenges that they face in the workplace because their bosses and clients expect eye-candy rather than useful information effectively displayed, they identify McCandless as the source of this problem.

You can imagine my dismay when Alberto told me about the lecture. I argued that he shouldn’t provide McCandless with a forum for promoting his work unless he also provided a critique of that work during the event. Alberto’s position was that, as an academic and a journalist, he should provide a platform for anyone whose work in the field of data visualization is known, regardless of quality or the harm that it does. Further, he argued that his students and those who have read his books already know that he finds much of McCandless’ work lacking. My response to this was, “What about those who attend the event but are not your students or readers?” After the discussion, I found myself wanting to ask one more question: “What do I say to someone who tells me that his boss attended the lecture, and this exposure to McCandless’ work set his efforts to promote effective practices back by several years?” Even worse, what if he also says, “Steve, I encouraged my boss to attend the lecture because it was hosted by Alberto Cairo, whose work you’ve praised.”

To no avail, I pleaded with Alberto to provide a counterpoint to the presentation to make it clear to attendees that McCandless often promotes practices that are ineffective. I argued that without providing this counterpoint, he was abdicating his responsibility as a teacher and a journalist. He saw it differently. He replied that his indirect approach to combating ineffective practices is perhaps more effective than my direct approach.

Is Alberto right? Was it appropriate for him to host a public lecture by McCandless without offering a counterpoint? Should I become less direct in my criticism of harmful practices? Will they cease to plague our work faster if I do? What does your experience tell you?

Take care,
