Thanks for taking the time to read my thoughts about Visual Business Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions that are either too urgent to wait for a full-blown article or too limited in length, scope, or development to require the larger venue. For a selection of articles, white papers, and books, please visit my library.


Ambient Orb — For those who don’t have time for a dashboard

September 26th, 2007

A couple of years ago I had an email conversation with a fellow who worked for a dashboard software company about one of his customers. He said that this customer, a CEO, requested a dashboard that consisted of nothing but a single red traffic light if something was wrong, and was otherwise blank. After taking a moment to digest the idea, I responded that I was sure glad I didn’t work for that company. Any CEO who only wants information when something is wrong is missing much of what he needs to know about what’s going on.

I was reminded of this conversation when I ran across an ad for the “Ambient Orb” this week. It is a large sphere that changes colors based on data input, such as the state of the Dow Jones Industrial Average. Here’s what it looks like when it is dressed in radiant blue:

Ambient Orb

For those of you who prefer the ever popular traffic light colors of red, yellow, and green, it can be programmed to shine in this manner as well, as illustrated by this diverging range of colors for displaying the state of the stock market:

Orb Color Scale

According to the New York Times, “This is ‘ambient information’ — the newest concept in how to monitor everyday data. We’ve been cramming stock tips, horoscopes and news items onto our computers and cellphones — forcing us to peer constantly at little screens. What if we’ve been precisely wrong?” I guess the gist of this statement is that we should be looking at bigger displays, rather than squinting at those tiny screens. “People want information, but they don’t want to invest a lot of time in getting it,” says Ambient president David Rose. “This makes getting information a ‘glanceable’ thing.” And what a wealth of information it is that we can derive from a single color!

I suppose that if you want some ever-present object in your office to alert you to a particular single piece of ultimately summarized information, the ambient orb could do the job. For those who don’t have a moment to spare, it could tell you whether it’s worth your while to actually look at your dashboard today. (In case you don’t know me well enough to know for sure, be assured that I am dripping with sarcasm right now.) Just as the dashboard can serve as a high-level front-end to a richer, more detailed fount of data, so can the ambient orb serve as the single-value front-end to your dashboard. Think of it as a mood ring on steroids.

Why not place one next to your bed and let it tell you whether it’s worth getting up in the morning, perhaps based on the weather forecast? I suppose that this is one way to deal with information overload, but are we really so busy and overwhelmed that we would choose to reduce information to this level? I’m planning to save my money ($150, plus shipping) for now, but if someone ever comes up with a lava lamp version, I might be tempted to buy.

Take care,


Coming Visualization Events — InfoVis and VizThink

September 10th, 2007

I’d like to draw your attention to two visualization events that will be taking place in the next few months: InfoVis 2007 and VizThink 2008. Though quite different, these conferences are both worthwhile if you’re interested in the full spectrum of how visualizations can be used to help us think and communicate.

InfoVis 2007 (October 28 – November 1 in Sacramento, California) is this year’s edition of an annual conference that is sponsored by IEEE. It is perhaps the best opportunity for people throughout the world who are involved in information visualization research and development to get together and showcase what they’re doing. This is where I go to stay in touch with the cutting edge of visualization research. Despite its academic and technical orientation, I believe that all software vendors who have an interest or any level of involvement in visualization ought to attend. This is a place where you can get to know people who really understand visualization and get exposed to great ideas that have not yet found their way into commercial software. I would especially love to see representatives there from every business intelligence vendor that sells software for reporting and analysis. I’ll be teaching a tutorial at the conference designed to help bridge the gap between infovis researchers and commercial software, and will also be delivering the capstone presentation, which will feature the infovis highlights of 2007: the good, the bad, and the just plain ugly.

VizThink is a brand-new conference that will hold its inaugural event January 27-29, 2008, in San Francisco. It is shaping up to include an interesting and eclectic assembly of visualization experts. This event will bring together people who represent all the main ways that visualization can be used to enhance thinking and communication, including infographics, information visualization, mind mapping, and even comics. Along with me, featured speakers whose work I already know include Scott McCloud, Robert Horn, Nigel Holmes, Eileen Clegg, Dave Gray, and Nancy Duarte.

Take care,


Taunts from the playground

August 3rd, 2007

Having convictions, staying true to them, and stating them publicly doesn’t always endear you to people. I discovered today that Dennis A. Ross of Data Analysis & Research is definitely not a member of my fan club. In fact, he seems to have his eye on the presidency of the “Stephen Few is a Douche Bag Club” — not my words but his (the douche bag part, that is). He wrote the following review of my book Information Dashboard Design, which I will quote here in full, without even charging Ross for the free advertising.

Review: Information Dashboard Design by Stephen Few

Posted on August 2nd, 2007 by dennis

In short, this is a pretty good book by a pretty big douchebag.

The good: What’s nice about this book is that it incorporates a number of the classic information design perspectives (Tufte, Brath & Peters, et al.) and the basics of design. If Tufte makes art books, this is a kind of a Chilton manual. It is a solid primer for dashboard design, and has a few nuggets I have not found elsewhere. I have already recommended it to a couple of clients.

The bad: It can be ridiculously academic. There are some heavily stressed and frequently reiterated points that are sometimes impractical or of questionable value. If my client wants his logo in the upper right (some of the most valuable real estate on the dashboard according to Few), then by god he or she is going to get it there. Flashing. Scrolling. Animated. Whatever the !@#$ they want. Its not my job to convince them what good design is, or more frequently, screw up their corporate branding and presentation formats. He rails on against pie charts–and I understand his argument, but if my Fortune 50 VP of Sales wants a big !@#$ing pie chart with the first slice colored Barney-purple, they get it. It purely arrogant (not to mention financially detrimental) to try and lecture them on the value of bar charts in this crisis.

I am not sure why he is enamored with his “invention” of what he calls bullet graphs. They are worthless for presenting data in which you typically present lower numbers as a good thing–turnover ratio, call time wait. They are confusing for me, and I have been in the game 20 years, especially when multiple scales are involved.

The douchebaggery: The real fun comes on his website blogs at This is where he rails against Xcelsius and Oracle and god knows what. Then supports an incredibly visually ugly product like Tableau. Don’t get me wrong, Herr Few is talented…but quite clueless in a practical sense (in addition to no statistical chops). These products can be improved, yes, but they are light years ahead of what we have been working with in the past 10 years. More importantly, these products exist to serve customer or client needs, NOT Few’s design sensibilities. As things evolve they will undoubtedly get better, with or without his help. For the vast majority of analysis and reporting, these products suffice in spades.

In an effort to understand Ross’ hostility, I went to his website. Had I not been on a mission, I would have left the site immediately, based on the Web design alone, even before finding examples of what he considers good dashboard design (all created with Crystal Xcelsius). Here’s the bottom portion of the home page:

Data Analysis and Research Home Page

Now, on to his dashboards. Three examples can be found on Ross’ website. In the interest of space, I’ll show only one, but you might find them all worth a gander.

Dashboard by Dennis Ross

I’ll allow Ross’ words and these examples of his work to speak for themselves. Here is the response I posted to his review of my book. I think I exercised a great deal of restraint.

It isn’t often that someone gets emotional enough in response to my work to call me a douche bag. Having never heard of you, I was curious why you were so hostile. Having looked at your website and reviewed your dashboards, which all appear to have been created with Crystal Xcelsius, I now understand. I would invite anyone who wants to see what you consider a well designed website and a well designed dashboard to visit your site at

Keep giving your customers what they want, even when it doesn’t work, rather than taking responsibility as a consultant to add value. That may satisfy them for the moment, but it won’t help them in the least.

Stephen Few
Author of “Information Dashboard Design”

Take care,


P.S. Ross cites Edward Tufte as an exceptional information designer, which he definitely is, and even includes a link to Tufte’s discussion forum on his site. This puzzles me, because Tufte’s opinion of Ross’ dashboards would make mine seem kind.

Business Objects’ Bullet Graphs: A Good Idea, Implemented Poorly

August 3rd, 2007

A few days ago the following screenshot on Business Objects’ website was brought to my attention:

BI Annotator

I was pleased to see sparklines and bullet graphs in this example, because they can work quite effectively on dashboards, but I was bothered by several problems with the implementation. Being short of time, I asked Bryan Pierce, who works with me, to critique the dashboard, which he has done below.

Take care,



This screenshot is intended to demonstrate a new add-on for BusinessObjects XI called BI Annotator. It was the dashboard used to exhibit the add-on, however, and not the add-on itself, that caught my eye: specifically, the column labeled “Current Performance.” Although this dashboard has multiple problems, such as column headers that are not aligned to match their data, the primary problem involves the implementation of the bullet graphs (that is, the linear bar-like graphs in the “Current Performance” column). Bullet graphs were invented by Stephen as a compact, data-rich, and efficient alternative to gauges on a dashboard. Below is an example of a bullet graph with labeled parts.

Labeled Bullet Graph

When Stephen created bullet graphs he gave an open invitation to software vendors to implement them and has since freely provided a design specification to anyone who is interested. As you can see, Business Objects has attempted to do this, but with limited success. They realized that bullet graphs can serve a useful purpose, but they haven’t taken the time to understand why they work and to design them according to that understanding.
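To make the parts concrete, the information a bullet graph encodes can be modeled as a featured measure, a comparative measure, and a set of qualitative ranges. The sketch below is illustrative only: the `qualitative_band` function, the range boundaries, and the revenue figures are hypothetical, not taken from the published specification.

```python
# Hypothetical sketch of what a bullet graph encodes: a featured measure
# (the bar), a comparative measure (the target marker), and qualitative
# ranges (the background fills). All numbers here are made up.

def qualitative_band(value, ranges):
    """Return the label of the qualitative range the value falls in.
    `ranges` maps labels to upper boundaries, in ascending order
    (Python dicts preserve insertion order)."""
    for label, upper in ranges.items():
        if value <= upper:
            return label
    return label  # beyond the last boundary: report the top band

ranges = {"Bad": 60, "Fair": 85, "Good": 100}  # illustrative boundaries

revenue = {"value": 91, "target": 95}          # illustrative measures
band = qualitative_band(revenue["value"], ranges)
meets_target = revenue["value"] >= revenue["target"]

print(band)          # Good
print(meets_target)  # False
```

Note that the two judgments are independent: a value can sit in the “Good” range while still falling short of its comparative measure, which is exactly the kind of nuance a single traffic-light color cannot convey.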

The first problem with the bullet graphs in this example is that they lack a quantitative scale. Without a labeled scale you cannot make quantitative judgments about the bars. You can make qualitative judgments of “Bad,” “Fair,” and “Good,” based on the color-coding of the bars, the circle icons to the right of the bullet graphs, and background color in the bullet graphs, but to determine a quantitative value you must rely on the percentage column to the right, which tells you nothing about the values associated with the ranges of bad, fair, and good.

On a closer comparison of the percentages to the bar lengths, one might assume that Business Objects intentionally left off the quantitative scale to hide another problem: The bars do not have a zero baseline. It appears that the quantitative scale starts at approximately 80%. This creates two problems. First, it exaggerates the actual differences between the bar lengths. This problem can be seen by comparing the first two bars, Bluetooth and Core CDMA. Although the Core CDMA bar appears to be about twice the length of the Bluetooth bar, it represents a value that is only 11% greater. Second, what happens if a value is less than 80%? You might assume that the non-zero baseline would automatically be set to just below the lowest value of the bars, but, as a quick glance at the PTOM bullet graph will tell you, that assumption would be wrong. The PTOM graph represents a value of 0% and therefore has no bar at all. This value is 90 percentage points less than the next lowest value, but the difference in bar length is only about 1/5th of the difference in bar length between the second lowest value and the highest value (a value difference of 46%). Because Business Objects used a non-zero baseline and then implemented it sloppily, the differences in length between the bars are not only exaggerated, they are also inconsistent.
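The arithmetic behind this exaggeration is simple: a bar drawn on a truncated scale has a length proportional to (value − baseline) rather than to the value itself. In the sketch below, the 80% baseline is the estimate from above, while the 90% and 100% figures are hypothetical stand-ins for the Bluetooth and Core CDMA measures, since the screenshot provides no scale to read them from.

```python
# With a zero baseline, bar length is proportional to the value; with a
# truncated baseline, it is proportional to (value - baseline), which
# exaggerates differences. The 80/90/100 figures are illustrative guesses.

def length_ratio(v1, v2, baseline=0):
    """Ratio of bar lengths for two values on a scale with the given baseline."""
    return (v2 - baseline) / (v1 - baseline)

bluetooth, core_cdma = 90, 100   # hypothetical percentages

actual = length_ratio(bluetooth, core_cdma)        # zero baseline
apparent = length_ratio(bluetooth, core_cdma, 80)  # scale truncated at 80%

print(round(actual, 2))   # 1.11 -> Core CDMA is only ~11% greater
print(apparent)           # 2.0  -> but its bar looks twice as long
```

An 11% difference rendered as a 2-to-1 difference in length is precisely the distortion a labeled, zero-based scale would have prevented.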

My next problem with the bullet graphs involves the use of color. As mentioned before, notice how the qualitative values of “Bad,” “Fair,” and “Good” have been tri-encoded on the bullet graphs. These values are encoded as the colors of the bars and circle icons (red for “Bad,” yellow for “Fair,” and green for “Good”), as well as in the intensity of the background color (dark gray, medium gray, and light gray for the respective qualitative values). One problem with this approach is that 10% of men and 1% of women are color blind, and most of them cannot distinguish green from red. Second, color coding has been overused, undermining its ability to function effectively. Often, when people overuse color in a dashboard, they do so with the best of intentions. They believe that by color-coding everything, they will make the display easier to comprehend as a whole. Unfortunately, however, when everything stands out, nothing does. Because of this, bright, intense colors should be reserved only for highlighting objects that demand our attention. Everything else should be muted. Grays and soft colors like those found in nature work well. Below is a series of bullet graphs that Stephen designed for use as a section of a larger call center dashboard, which provides an example of appropriate color use.

Multiple Bullet Graphs

The different shades of gray in the background of the bullet graphs provide subtle qualitative encoding while color is used only for those measures that demand attention. It’s easier to look at these gray graphs than the brightly colored graphs in the Business Objects example, and the items of most importance jump out more quickly and clearly. It also functions well for those who are color blind.

Another feature that Business Objects’ bullet graphs lack is an adequately labeled comparative measure. A comparative measure gives the bullet graph meaningful context. For instance, sales of $24.3 million are not very meaningful until you know how that compares to some other measure such as the target, last year’s sales, or average sales. In this case, we might assume that the little white bar located between the medium gray and light gray background areas is the comparative measure because it appears to be located at about the 100% mark (although again, this must be approximated because there is no quantitative scale). If we assume that this is the case, then we’re still left without appropriate context because the comparative measure is not defined. If it represents a target, 100% might be good, but if it represents last year’s performance, 100% might not be good: 100% of last year’s performance isn’t so good if you were expecting 10% growth. The comparative measures need to be adequately labeled and they should be slightly more prominent, so that they’re easier to spot. Additionally, they shouldn’t be built into the background, because there are bound to be cases where the breakpoint between “Fair” and “Good” and the target are different. Or at least, there should be.
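The point about context can be shown with a little arithmetic. All of the figures in the sketch below are hypothetical: the same 100% reading looks fine against flat year-over-year sales but falls short once expected growth is factored in.

```python
# Hypothetical illustration: what "100%" means depends entirely on which
# comparative measure it is expressed against. All figures are made up.

def percent_of(value, comparison):
    """Express a value as a percentage of a comparative measure."""
    return 100 * value / comparison

sales = 24.3              # current sales, $M
last_year = 24.3          # hypothetical: sales are flat vs. last year
expected_growth = 0.10    # hypothetical: 10% growth was expected

vs_last_year = percent_of(sales, last_year)
vs_growth_plan = percent_of(sales, last_year * (1 + expected_growth))

print(round(vs_last_year, 1))    # 100.0 -> looks fine vs. last year
print(round(vs_growth_plan, 1))  # 90.9  -> well short of the growth plan
```

Without a label saying which comparison the marker encodes, a reader has no way to know which of these two stories the bullet graph is telling.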

But, maybe that’s the point. Perhaps, Business Objects is just trying to be “good enough” to turn a profit, instead of making an effort to excel. So often, software vendors choose the easy route, giving people more of the visual fluff they’re so used to, instead of creating products that actually function well, based on a keen understanding of what works and why.

Oracle/PeopleSoft Enterprise Service Dashboard — A disservice to people who desperately need better

July 30th, 2007

Thanks to a press release from Oracle last week, I found the two dashboard screen shots that appear below in a Product Sheet that describes the features of Oracle’s new PeopleSoft Enterprise Service Dashboard software. These examples feature a dashboard that is designed for monitoring call center activity—or at least this is their intent. I’m frustrated that software vendors continue to produce such anemic and poorly designed dashboard examples. Whether the software is capable of doing better, I don’t know, but these screen shots demonstrate no understanding whatsoever of effective dashboard design. Oracle should be embarrassed.

Oracle/PeopleSoft Enterprise Service Dashboard (Small)
(Click the dashboard images to enlarge.)

Due to a busy schedule, I asked Bryan Pierce, who works with me, to critique these examples. Until a few months ago when he began to work at Perceptual Edge, Bryan had no experience with graphical communication and had never seen a business intelligence dashboard. Today, he spends his days managing the day-to-day operations of Perceptual Edge (website, bookkeeping, etc.), but has been picking up data visualization skills on the side, mostly by reading my books and articles. I’m pointing this out because it’s worthwhile to note that it doesn’t take years of experience to develop the skills of graphical communication. Bryan’s critique, which I’m confident you’ll find useful, begins below my signature. Oracle could learn a thing or two from Bryan.


This review is based on the two images that were included in Oracle’s product data sheet. Unfortunately, due to their low resolution, some of the text was illegible, so I wasn’t able to conduct as thorough a review as I would have liked. Below is a list of the problems that I found.


  • All of the graphs use 3-D effects to encode 2-D data. The third dimension adds no value but does succeed in making the values harder to decode.
  • Neither display is sufficiently data-rich. In creating a dashboard, it’s important to put all of the information that people might want to compare together on a single screen. Even when people probably won’t compare values, it’s easier and more efficient for them to work with a well-designed single page than it is to work with multiple data-sparse screens. The size of the graphs alone—giant pies and single bars that take up a quarter of the screen—indicates that much more information could have been included on the display. It’s likely that, given a proper layout, multiple screens would not even have been necessary.
  • The borders between graphs are unnecessarily salient, which makes it more difficult to track between them to make comparisons.
  • All of the tick marks used are redundant; they are not necessary when gridlines are used.

Main Screen (first image):

  • Very little of the information is encoded visually, undermining much of the strength of a dashboard. The only information that is encoded visually is the information contained in the pie chart and the bar and line graph. The multiple tables require us to read the information (slow serial processing) instead of allowing us to see the information like graphs do (fast parallel processing).
  • The bar and line graph appears to display call volume (bars) and call capacity (line) as it changes through the day (although, due to the low resolution, I can’t be sure of this). Using bars and a line together creates the potential for confusion when they cross. For instance, look at the 5th bar from the left. If its magnitude were slightly higher, the top-left side of the bar would be above the line, but the top-right side would not. In some cases, it could be difficult to quickly determine whether the bar or the line represented the greater value. The 3-D effect only adds to the problem. If only bars or only lines were used, this wouldn’t be a problem. If both bars and lines are used, the problem could be alleviated by adding small data points to the line to assist in comparisons.
  • The pie chart appears to display a breakdown of calls by the originating region (US – Midwest, United Kingdom, etc.). Pie charts don’t work well because people have a difficult time accurately comparing area. When a third dimension of depth is added, this becomes even more difficult because the slices at the top of the pie are shrunken slightly while the bottom slices are slightly enlarged to provide the 3-D perspective. 2-D bar charts work much better than pie charts because, instead of relying on our poor ability to compare areas, they are based upon our ability to compare lengths, which we do quite well.
  • The gradients used in the background of the graphs are more visually salient than a solid light color would be and in some cases, they can misleadingly alter our perception of the data.
  • The background images are distracting and make it more difficult to read the data.

Drill-Down Screen (second image):

  • Notice the legend below the bottom-left bar graph. On many dashboards the colors green, red, and yellow are commonly used for encoding data. Red is usually used to represent “bad,” yellow “borderline” or “satisfactory,” and green “good.” The problem with this color-coding is that the 10% of men and 1% of women who are color-blind might not be able to tell the good from the bad. In this case, it appears that Oracle might have tried to avoid this by using red, blue, and light green (which could potentially be differentiated by the color-blind because the light green has a lower intensity than the red). Unfortunately, look at the labels on the legend. The red box is labeled “Green,” the blue box is labeled “Red,” and the light green box is labeled “Yellow.” This creates a big problem for the display. Studies have found that when the word for a color is displayed in a conflicting color (for instance, “green” written in red text), it is significantly slower and more difficult to read. How many times would you need to go back to the legend before you would easily remember that the blue bar means “red?” This cross-coding is a significant problem for everyone, whether color-blind or not.
  • The red and blue colors used to encode the bars mean different things in different graphs. When the same color is used in multiple graphs, people naturally tend to assume that there is meaning behind this; in this case, they would attempt to find meaning where there is none. When multiple colors are used, care should be taken to prevent people from looking for connections that don’t exist. There is an exception to this rule: If everything is the same color. All of the bars could be the same color without any confusion. When everything is encoded with the same color, instead of just certain things, people aren’t inclined to see false relationships.
  • There is little point in encoding a single value as a bar. It takes at least as much time to decode as it would to just read the number (because you must refer to the scale). Once multiple bars are added, bar charts begin to shine because the approximate value of every bar can be understood by one or two quick glances at the scale and you can quickly make some comparisons between the bars without referring to the scale at all.
  • The bars on this screen all use transparency, which is another gratuitous effect that serves no positive purpose.
  • The gridlines are probably unnecessary in all of these graphs. If they are required, they should be lightened so that they are just light enough to be visible and no more.

Below, I have included an image of a call center dashboard that Stephen created for his book Information Dashboard Design. Notice how much more information is provided on the screen: It’s not cluttered, but it is much more data-dense than the previous examples. Notice how much cleaner it looks; color is not being used gratuitously and there are no special effects to distract from the data.

Sample Telesales Dashboard by Stephen Few (small)
(Click the image to enlarge.)