Thanks for taking the time to read my thoughts about Visual Business Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions that are either too urgent to wait for a full-blown article or too limited in length, scope, or development to require the larger venue. For a selection of articles, white papers, and books, please visit my library.

 

To Err Is Human

June 28th, 2016

My revision of Alexander Pope’s words, “To err is human, to forgive, divine,” is not meant to diminish the importance of forgiveness, but instead to promote the great value of errors as learning opportunities. We don’t like to admit our mistakes, but it’s important that we do. We all make errors in droves. Failing to admit and learn from private errors may harm no one but ourselves, but this failure has a greater cost when our errors affect others. Acknowledging public errors, such as errors in published work, is especially important.

I was prompted to write this by a recent email exchange. I heard from a reader named Phil who questioned a graph that appeared in an early printing of my book Information Dashboard Design (First Edition). This particular graph was part of a sales dashboard that I designed to illustrate best practices. It was a horizontal bar graph with two scales and two corresponding series of bars, one for sales revenues and one for the number of units sold. Its design inadvertently encouraged a comparison of revenues and unit counts that could be misleading (see below).

Original

I would not design a graph in this manner today, but when I originally wrote Information Dashboard Design in 2005, I had not yet thought this through. This particular graph was further complicated by the fact that the scale for units was expressed in 100s (e.g., a value of 50 on the scale represented 5,000), which was a bit awkward to interpret. I fixed the dual-scale and units problem in the book long ago (see below).

Current

I began my response to Phil’s email with the tongue-in-cheek sentence, “Thanks for reminding me of past mistakes.” I had forgotten about the earlier version of the sales dashboard and Phil’s reminder made me cringe. Nevertheless, I admitted my error to him and now I’m admitting it to you. I learned from this error long ago, which removes most of this admission’s sting. Even had the error persisted to this day, however, I would have still acknowledged it, despite the discomfort, because that’s my responsibility to readers, and to myself as well.

When, in the course of my work in data visualization, I point out errors in the work of others, I’m not trying to discourage them. Rather, I’m firstly hoping to counter the ill effects of those errors on the public and secondly to give those responsible for the errors an opportunity to learn and improve. This is certainly the case when I critique infovis research papers. I want infovis research to improve, which won’t happen if poor papers continue to be published without correction. This was also the case when I recently expressed my concern that most of the books written about data visualization practices in the last decade qualify as “Data Visualization Lite.” I want a new generation of data visualization authors and teachers to carry this work that I care about forward long after my involvement has ceased. I want them to stand on my shoulders, not dangle precariously from my belt.

Imagine how useful it would be for researchers to publish follow-ups to their papers a few years after publication. They could correct errors and describe what they’ve learned since. They could warn readers to dismiss claims that have since been shown invalid. They could describe how they would redesign the study if they were doing it again. This could contribute tremendously to our collective knowledge. How often, however, do authors of research papers ever mention previous work, except briefly in passing? What if researchers were required to maintain an online document, linked to each of their published papers, to record all subsequent findings affecting the content of the original paper? As it is now, bad research papers never die. Most are soon forgotten, assuming they were ever noticed in the first place, but they’re often kept alive for many years through citations, even when they’ve been deemed unreliable.

A similar practice could be followed by authors of books. Authors sometimes do this to some degree when they write a new edition of a book. Two of my books are now in their second editions. Most of the changes in my new editions involve additional content and updated examples, but I’ve corrected a few errors as well. Perhaps I should have included margin notes in my second editions to point out content that was changed from the first edition to correct errors. This might be distracting for most readers, however, especially those who hadn’t read the previous edition, but I could provide a separate document on my website listing those corrections for anyone who cares. Perhaps I will in the future.

Errors are our friends if we develop a healthy relationship with them. This relationship begins with acceptance, continues through correction, and lives on in the form of better understanding. Those who encourage this healthy relationship by opening their work to critique and by critiquing the work of others are likewise our friends. If I’ve pointed out errors in your work, I’m not your enemy. If you persist in spreading errors to the world despite correction, however, you become an enemy to your readers.

Data visualization matters. It isn’t just a job or field of study, it’s a path to understanding, and understanding is our bridge to a better world.

Take care,

Signature

We Must Vet Our Data Visualization Advisers with Care

June 24th, 2016

When we need advice in our personal lives, to whom do we turn? To someone we trust, who has our interests at heart and is wise. So why then do we often rely on advisers in our professional lives whose interests are in conflict with our own? If your work involves business intelligence, analytics, data visualization, or the like, from whom do you seek advice about products and services? If you’re like most professionals, you unwittingly seek advice from people and organizations with incentives to sell you something. You get advice from the vendors themselves, from technology analysts with close ties to those vendors, or from journalists who are secretly compensated by those vendors. That’s not rational, so why do we do it? Usually because it’s convenient, and sometimes because we don’t really care whether the advice is good, for it is our employers, not us, who will suffer the consequences. If we actually care, however, we should do a better job of vetting our advisers.

It should be obvious that we cannot expect objectivity from the vendors themselves. Even when a vendor’s employees post advice on independent websites and claim that their opinions are their own, they remain loyal to their employers. In fact, it’s a great marketing ploy for vendors to have their employees post advice from independent sites rather than from their own. It suggests a level of objectivity that serves the vendor’s interests and multiplies their presence on the web. We must also question with similar suspicion the objectivity of consultants and teachers who have built their work around a single product.

What about technology analyst groups, such as Gartner, Forrester, and TDWI, to name a few of the big guys? These organizations fail in many ways to maintain a healthy distance from the very technology vendors that are the subject of their advice. In fact, they are downright cozy with the vendors.

Trustworthy technology advisers go to great pains to maintain objectivity. They are few and far between. To be objective, I believe that advisers should do the following:

  • Disclose all of their relationships with vendors. This is especially true of relationships that involve the exchange of money. If they accept money from vendors, they should willingly disclose the figures upon request.
  • Do not allow vendors to advertise on their websites, in their publications, or at their events.
  • Only accept payments from vendors for professional services specifically rendered to improve the vendor’s products or services. Payments for marketing advice do not qualify.
  • Do not publish content prepared by vendors.

Try to find technology analysts and journalists who follow these guidelines. Even with diligent effort, you won’t find many, because there aren’t many to find.

Try an experiment. If your company subscribes to one of the big technology analyst services (Gartner, etc.), next time they produce a report that scores BI, analytics, or data visualization products, ask them for a copy of the data on which they based those scores, along with the algorithms that processed the data. This is likely done in an Excel spreadsheet, so just ask them for a copy of the file. After making the request, watch them squirm and expect creative excuses. Most likely they’ll say something along these lines: “Our scoring system is based on a sophisticated and proprietary algorithm that we cannot make public because it gives us an edge over the competition.” Bullshit. There is definitely a secret in that spreadsheet that they don’t want to share, but it is not a sophisticated algorithm.

After they refuse to show their work, move on to the following request: “Please give me a list of the vendors that you evaluated along with the amount of money that you have received from each for the last few years.” They won’t give it to you, of course, and they’ll explain that they cannot for reasons of confidentiality. Think about that for a moment. It is no doubt true that they promised to never reveal the money that changed hands between them and the vendors, but shouldn’t this clear conflict of interest be subject to scrutiny? Technology analysts and the vendors that they support are not fans of transparency.

There are a few technology advisers who do good work and do it with integrity. If you want objective and expert advice from someone who is looking out for your interests, be sure to vet your advisers with diligence and care. Question their motives. If it looks like they’re acting as an extension of vendor marketing efforts, they probably are. If, on the other hand, you’re just looking for easy answers, abandon all skepticism and do a quick Google search and then read the advice that receives top ranking. Or, better yet, schedule a call with the analyst group for whose advice you pay dearly in the form of an annual subscription.

Take care,

Signature

(Postscript: Yes, I consider myself one of the few data visualization advisers whom you can trust.)

Data Visualization Lite

June 13th, 2016

In the world of data visualization, we are progressing at a snail’s pace. This is not the encouraging message that vendors and many “experts” are promoting, but it’s true. In the year 2004, I wrote the first edition of Show Me the Numbers in response to a clear and pressing need. At the time no book existed that pulled together the principles and best practices of quantitative data presentation and made them accessible to the masses of mostly self-trained people who work with numbers. I was originally inspired by the work of Edward Tufte, but realized that his work, exceptional though it was, awed us with a vision of what could be done without actually showing us how to do it. After studying all of the data visualization resources that I could find at the time, I pulled together the best of each, combined it with my own experience, gave it a simple and logical structure, and expressed it comprehensibly in accessible and practical terms. At that time, data visualization was not the hot topic that it is today. Since then, as the topic has ignited the imagination of people in the workplace and become a dominant force on the web, several books have been written about quantitative data presentation. I find it disappointing, however, that almost nothing new has been offered. With few exceptions, most of the books that have been written about data visualization, excluding books about particular tools or specific applications (e.g., dashboard design), qualify as data visualization lite.

Those books written since 2004 that aren’t filled with errors and poor guidance, with few exceptions, merely repeat what has been written previously. Saying the same old thing in a new voice is not helpful unless that new voice reaches an audience that hasn’t already been addressed or expresses the content in a way that is more informative. Most of the new voices are addressing data visualization superficially, appealing to an audience that desires skill without effort. As such, they dangle a false promise before the eager eyes of lazy readers. Data visualization lite is not a viable solution to the world’s need for clear and accurate information. Instead, it is a compromise tailored to appeal to short attention spans and a desire for immediate expertise, which isn’t expertise at all.

In a world that longs for self-service business intelligence, naively placing data sensemaking and communication in the same category as pumping gas, we need fresh voices to proclaim the unpopular truth that these skills can only be learned through thoughtful training and prolonged practice. It is indeed true that many people in our organizations can learn to analyze and present quantitative data effectively, but not without great effort. We don’t need voices to reflect the spirit of our time; we need voices to challenge that spirit—voices of transformation. Demand depth. Demand lessons born of true expertise. Demand evidence.

Where are these fresh and courageous voices? Who will light the way forward? There are only a few who are expressing new content, addressing new audiences, or expressing old content in new and useful ways. Until we demand more thoughtful and transformative work, the future of data visualization will be dim.

Take care,

Signature

Avoiding Quantitative Scales That Make Graphs Hard to Read

May 24th, 2016

This blog entry was written by Nick Desbarats of Perceptual Edge.

Every so often I come across a graph with a quantitative scale that is confusing or unnecessarily difficult to use when decoding values. Consider the graph below from a popular currency exchange website:

Example of poorly chosen quantitative scale
Source: www.xe.com

Let’s say that you were interested in knowing the actual numerical value of the most recent (i.e., right-most) point on the line in this graph. Well, let’s see, it’s a little less than halfway between 1.25 and 1.40, so a little less than half of… 0.15, so about… 0.06, plus 1.25 is… 1.31. That feels like more mental work than one should have to perform to simply “eyeball” the numerical value of a point on a line, and it most certainly is. The issue here is that the algorithm used by the graph rendering software generated stops for the quantitative scale (0.95, 1.10, 1.25, etc.) that made perceiving values in the graph harder than it should be. This is frustrating since writing an algorithm that generates good quantitative scales is actually relatively straightforward. I had to develop such an algorithm in a previous role as a software developer and derived a few simple constraints that consistently yielded nice, cognitively fluent linear scales, which I’ve listed below:

1. All intervals on the scale should be equal.

Each interval (the quantitative “distance” between value labels along the scale) should be the same. If they’re not equal, it’s more difficult to accurately perceive values in the graph, since we have to gauge portions of different quantitative ranges depending on which part of the graph we’re looking at (see example below).

Unequal intervals
Source: www.MyXcelsius.com

2. The scale interval should be a power of 10 or a power of 10 multiplied by 2 or 5.

Powers of 10 include 10 itself, 10 multiplied by itself any number of times (10 × 10 = 100, 10 × 10 × 10 = 1,000, etc.), and 10 divided by itself any number of times (10 ÷ 10 = 1, 10 ÷ 10 ÷ 10 = 0.1, 10 ÷ 10 ÷ 10 ÷ 10 = 0.01, etc.). We find it easy to think in powers of 10 because our system of numbers is based on 10. We also find it easy to think in powers of 10 multiplied by 2 or 5, the two numbers other than itself and 1 by which 10 can be divided to produce a whole number (i.e., 10 ÷ 2 = 5 and 10 ÷ 5 = 2). Here are a few examples of intervals that can be produced in this manner:

Sample Powers of 10

Here are a few examples of good scales:

Good Scales

Here are a few examples of bad scales:

Bad Scales

After this post was originally published, astute readers pointed out that there are some types of measures for which the “power of 10 multiplied by 1, 2 or 5” constraint wouldn’t be appropriate, specifically, measures that the graph’s audience thinks of as occurring in groups of something other than 10. Such measures would include months (3 or 12), seconds (60), RAM in gigabytes (4 or 16), and ounces (16). For example, a scale of months of 0, 5, 10, 15, 20 would be less cognitively fluent than 0, 3, 6, 9, 12, 15, 18 because virtually everyone is used to thinking of months as occurring in groups of 12 and many business people are used to thinking of them in groups of 3 (i.e., quarters). If, however, the audience is not used to thinking of a given measure as occurring in groups of any particular size or in groups that number a power of 10, then the “power of 10 multiplied by 1, 2 or 5” constraint would apply.
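To make this constraint concrete in code, here is a small illustrative snippet. Python and all of the names in it are my choices for illustration only; this is not taken from any particular charting library:

```python
# Candidate "nice" intervals: 1, 2, or 5 times a power of 10 (constraint 2).
# Measures that the audience groups in something other than 10s (months,
# seconds, etc.) can override the multipliers. All names are illustrative.
def candidate_intervals(multipliers=(1, 2, 5), exponents=range(-3, 4)):
    return sorted(m * 10.0 ** k for k in exponents for m in multipliers)

decimal_candidates = candidate_intervals()  # 0.001, 0.002, 0.005, ..., 1000, 2000, 5000
month_candidates = [1, 3, 6, 12]            # quarters and years rather than powers of 10
```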

3. The scale should be anchored at zero.

This doesn’t mean that the scale needs to include zero but, instead, that if the scale were extended to zero, one of the value labels along the scale would be zero. Put another way, if the scale were extended to zero, it wouldn’t “skip over” zero as it passed it. In the graph below, if the scale were extended to zero, there would be no value label for zero, making it more difficult to perceive values in the graph:

Extended scale does not include zero stop
Source: www.xe.com, with modifications by author
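Put in terms of a simple check, “anchored at zero” just means that the first value label is a whole-number multiple of the interval. Here is a minimal sketch of that check (the function name and tolerance are mine, chosen purely for illustration):

```python
def anchored_at_zero(first_label, interval, tol=1e-9):
    # True when extending the scale toward zero would land exactly on a zero
    # label, i.e., the first label is a whole-number multiple of the interval.
    ratio = first_label / interval
    return abs(ratio - round(ratio)) < tol

anchored_at_zero(0.95, 0.15)  # False: the xe.com scale above skips over zero
anchored_at_zero(1.00, 0.25)  # True: 1.00 is four intervals of 0.25 from zero
```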

In terms of determining how many intervals to include and what quantitative range the scale should span, most graph rendering applications seem to get this right, but I’ll mention some guidelines here for good measure.

Regarding the actual number of intervals to include on the scale, this is a little more difficult to capture in a simple set of rules. The goal should be to provide as many intervals as are needed to allow for the precision that you think your audience will require, but not so many that the scale will look cluttered, or that you’d need to resort to an uncomfortably small font size in order to fit all of the intervals onto the scale. For horizontal quantitative scales, there should be as many value labels as possible that still allow for enough space between labels for them to be visually distinct from one another.

When determining the upper and lower bounds of a quantitative scale, the goal should be for the scale to extend as little as possible above the highest value and below the lowest value while still respecting the three constraints defined above. There are two exceptions to this rule, however:

  1. When encoding data using bars, the scale must always include zero, even if this means having a scale that extends far above or below the data being featured.
  2. If zero is within two intervals of the value in the data that’s closest to zero, the scale should include zero.

It should be noted that these rules apply only to linear quantitative scales (e.g., 70, 75, 80, 85), and not to other scale types such as logarithmic scales (e.g., 1, 10, 100, 1,000), for which different rules would apply.
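To pull the three constraints and the bounds guidelines together for the linear case, here is a rough sketch of a scale generator. This is not the actual algorithm I wrote in my software-developer days; the function names, parameter names, and default interval count are all assumptions made for illustration, but it shows how little machinery the constraints require:

```python
import math

def nice_interval(span, max_intervals=8):
    # Smallest interval of the form {1, 2, 5} x 10^k that covers `span`
    # with at most `max_intervals` equal intervals (constraints 1 and 2).
    raw = span / max_intervals
    magnitude = 10.0 ** math.floor(math.log10(raw))
    for multiplier in (1, 2, 5, 10):
        if multiplier * magnitude >= raw:
            return multiplier * magnitude

def nice_scale(data_min, data_max, max_intervals=8, bars=False):
    lo, hi = float(data_min), float(data_max)
    if bars:
        # Bars must always include zero.
        lo, hi = min(lo, 0.0), max(hi, 0.0)
    span = (hi - lo) or abs(hi) or 1.0
    step = nice_interval(span, max_intervals)
    if not bars:
        # Include zero when it is within two intervals of the data value
        # closest to it. (A fuller implementation would then recompute the
        # interval for the widened range.)
        if 0 < lo <= 2 * step:
            lo = 0.0
        elif -2 * step <= hi < 0:
            hi = 0.0
    # Bounds are whole-number multiples of the interval, so the scale is
    # anchored at zero (constraint 3) and extends as little as possible
    # beyond the data.
    lower = math.floor(lo / step) * step
    upper = math.ceil(hi / step) * step
    count = int(round((upper - lower) / step))
    return [round(lower + i * step, 10) for i in range(count + 1)]

nice_scale(1.02, 1.38)           # [1.0, 1.05, 1.1, ..., 1.35, 1.4]
nice_scale(146, 178, bars=True)  # [0.0, 50.0, 100.0, 150.0, 200.0]
```

In a sketch like this, the only judgment call left is the target number of intervals, which should reflect the precision your audience needs and the space available, as described above.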

In my experience, these seem to be the constraints that major data visualization applications respect, although Excel 2011 for Mac (and possibly other versions and applications) happily recommends scale ranges for bar graphs that don’t include zero, and seems to avoid scale intervals that are powers of 10 multiplied by 2, preferring to use only powers of 10 or powers of 10 multiplied by 5. I seem to be coming across poorly designed scales more often, however, which is probably due to the proliferation of small-vendor, open-source, and home-brewed graph rendering engines in recent years.

Nick Desbarats

Is the Avoidance of 3-D Bar Graphs a Knee-Jerk Reaction?

May 6th, 2016

This is my response to a recent blog article by Robert Kosara of Tableau Software titled “3D Bar Charts Considered Not that Harmful.” Kosara has worked in the field of data visualization as a college professor and a researcher for many years, first at the University of North Carolina and for the last several years at Tableau. He’s not a fly-by-night blogger. But even the advice of genuine experts must be scrutinized, for gaps in their experience and biases, such as loyalties to their employers, can render their advice unreliable.

It has become a favorite tactic of information visualization (infovis) researchers to seek notoriety by discrediting long-held beliefs about data visualization that have been derived from the work of respected pioneers. For example, poking holes in Edward Tufte’s work in particular now qualifies as a competitive sport. Tufte’s claims are certainly not without fault. Many of his principles emerged as expert judgments rather than from empirical evidence. Most of his expert judgments, however, are reliable. While we should not accept anyone’s judgment as gospel without subjecting it to empirical tests, when we test them, we should do so with scientific rigor. Most attempts by researchers to discredit Tufte’s work have been based on sloppy, unreliable pseudo-science.

Back to Kosara’s recent blog article. Here’s the opening paragraph:

We’ve turned the understanding of charts into formulas instead of encouraging people to think and ask questions. That doesn’t produce better charts, it just gives people ways of feeling superior by parroting something about chart junk or 3D being bad. There is little to no research to back these things up.

We should certainly encourage people to use charts in ways that lead them to think and ask questions. Have you ever come across anyone who disagrees with this? Apparently the formulaic understanding of charts that “we” have been promoting produces a sense of superiority, evidenced by the use of terms such as “chart junk,” coined by Tufte. Kosara’s blog entry was written in response to Twitter-based comments about the following 3-D bar graph:

Trivapro Graph

As you can see, this is not an ordinary 3-D bar graph. It starts out as a fairly standard 2-D bar graph on the left and then takes a sudden turn in perspective to reveal an added dimension of depth to the bars that shoot out from the page. Kosara describes the graph as follows:

At first glance, it’s one of those bad charts. It’s 3D, and at a fairly extreme angle. The perspective projection clearly distorts the values, making the red bar look longer in comparison to its real value difference. The bars are also cut off at the base, at least unless you consider the parts with the labels to be the bottoms of the bars (and even then, they’re not the full length to start at 0).

But then, what is this supposed to show? It’s about the fact that a fungicide names [sic] Trivapro produces more yield than the two other products or no treatment. There is no confusion here about which bar is longer. And the values are right there on the chart. You can do some quick math to figure out that a gain of 32 over the base of 146 is an increase of a bit over 20%…

Based on Kosara’s own description, this graph does not communicate clearly and certainly isn’t easy or efficient to read. He goes on to admit this fact more directly.

Is this a great chart? No. It’s not even a good chart. Is this an accurate chart? No. Though it has the numbers on it, so it’s less bad than without.

Lest we rashly render the judgment that this graph deserves, Kosara cautions, “It is much less bad than the usual knee-jerk reaction would have you think, though.” Damn, it’s too late. My knee already jerked with abandon.

The gist of Kosara’s article is two-fold: 1) 3-D graphs are not all that bad, and 2) we should only be concerned with problems that researchers have confirmed as real. It would be great if we could rely on infovis researchers to serve as high priests of graphical Truth, but relatively few of them have been doing their jobs. His own recent studies and the others that he cites in the article are fundamentally flawed. This includes the respected study on which Kosara bases his claim that 3-D effects in graphs are “not that harmful,” titled “Reading Bar Graphs: Effects of Extraneous Depth Cues and Graphical Context” by Jeff Zacks, Ellen Levy, Barbara Tversky, and Diane Schiano. This paper, published in the Journal of Experimental Psychology: Applied in 1998, missed the mark.

The 1998 study consisted of five experiments. The first two experiments contain the findings on which Kosara bases his defense of 3-D bar graphs. In the first experiment, test subjects were shown a test bar in a rectangular frame, which was rendered in either 2-D or 3-D. My reproductions of both versions are illustrated below.

Reproduction of 2-D and 3-D bars used in study

By only slightly manipulating the perspective of the 3-D bar, it was kept as simple as possible, giving it the best possible chance of causing no harm. Subjects were then asked to match the test bar to one of the bars in a separate five-bar array. The bars in the array ranged in height from 20 millimeters to 100 millimeters in 20-millimeter increments. Two versions of the five-bar array were provided—one with 2-D bars and one with 3-D bars—one on each side of a separate sheet of paper. Half of the time the test bar was shown alone, as illustrated above, and the other half a second context bar appeared next to the test bar, but the test bar was always marked as being the one that should be matched to a bar in the five-bar array. The purpose of the context bar was to determine if the presence of another bar of a different height from the test bar had an effect on height judgments. This experiment found that subjects made greater errors in matching the heights of 3-D vs. 2-D bars, as expected. It also found that the presence of context bars had no effect on height judgments.

It was the second experiment that led Kosara to claim that 3-D effects in bars ought not to concern us. The second experiment was exactly like the first, except for one difference that Kosara described as the addition of a “delay after showing people the bars.” He went on to explain that this delay eliminated differences in height judgments when viewing 2-D vs. 3-D bars, and further remarked, “That is pretty interesting, because we don’t typically have to make snap judgments based on charts.” Even on the surface this comment is wrong. When we view bar graphs, the perceptual activity of reading and comparing the heights of the bars is in fact a snap judgment. It is handled rapidly and pre-attentively by the visual cortex of the brain, involving no conscious effort. The bigger error in Kosara’s comment, however, is his description of the second experiment as the same as the first except for a “delay” after showing people the bars. The significant difference was not the delay itself, but what caused it. After viewing the test bar, subjects were asked to remove it from view by turning the sheet of paper over, placing it on the desk, and then retrieving a second sheet on which the test bar no longer appeared before looking at the five-bar array to select the matching bar. In other words, when they made their selection the test bar was no longer visible, which meant that they were forced to rely on working memory as their only means of matching the test bar to the bar of corresponding height in the five-bar array.

When subjects were forced to rely on working memory rather than using their eyes to match the bars, errors in judgment increased significantly overall. In fact, errors increased so much that the difference between 2-D and 3-D bars seen in the first experiment disappeared. Put differently, the errors introduced by relying on working memory were so large that the lesser differences based on 2-D vs. 3-D rendering became negligible in comparison.

Another difference surfaced in the second experiment, which Kosara interpreted as further evidence that 3-D effects shouldn’t concern us when compared to greater problems.

The other effect is much more troubling, though: neighboring bars had a significant effect on people’s perception. This makes sense, as we’re quite susceptible to relative size illusions like the Ebinghaus [sic] Effect (in case you haven’t seen it, the orange circles below are the same size).

Ebbinghaus Illusion

What this means is that the data itself causes us to misjudge the sizes of the bars!

Where to begin? The Ebbinghaus Illusion pertains specifically to the areas of circles, not the lengths of bars. Something similar, called the Parallel Lines Illusion, was what concerned the authors of the 1998 study (see below).

Parallel Lines Illusion

Most people perceive the right-hand line in the frame on the left as longer than the right-hand line in the frame on the right, even though they are the same length. As you can see in my illustration below, however, this illusion does not apply to lines that share a common baseline and a common frame, as bars do. The second and fourth lines appear equal in height.

Parallel Lines Illusion Doesn't Apply to Bar Graphs

Also, if the presence of context bars caused subjects to make errors in height judgments, why wasn’t this effect found in the first experiment? Let’s think about this. Could the fact that subjects had to rely on working memory explain the increase in errors when context bars were present? You bet it could. The presence of two visual chunks of information (the test bar and the context bar) in working memory rather than one (the test bar only) increased the cognitive load, making the task more difficult. The second experiment revealed absolutely nothing about 2-D vs. 3-D bars. Instead, it confirmed what was already known: working memory is limited and reliance on it can have an effect on performance.

In the last paragraph of his article, Kosara reiterates his basic argument:

It’s also important to realize just how little of what is often taken as data visualization gospel is based on hearsay and opinion rather than research. There are huge gaps in our knowledge, even when it comes to seemingly obvious things. We need to acknowledge those and strive to close them.

Let me make an observation of my own. It is important to realize that what is often claimed by infovis researchers is just plain wrong, due to bad science. I wholeheartedly agree with Kosara that we should not accept any data visualization principles or practices as gospel without confirming them empirically. However, we should not throw them out in the meantime if they make sense and work, and we certainly shouldn’t reject them based on flawed research. The only reliable finding in the 1998 study regarding 2-D vs. 3-D bars was that people make more errors when reading 3-D bars. Until and unless credible research tells us differently, I’ll continue to avoid 3-D bar graphs.

(P.S. I hope that Kosara’s defense of 3-D effects is not a harbinger of things to come in Tableau. That would bring even more pain than those silly packed bubbles and word clouds that were introduced in version 8.)

Take care,

Signature