Thanks for taking the time to read my thoughts about Visual Business Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions that are either too urgent to wait for a full-blown article or too limited in length, scope, or development to require the larger venue. For a selection of articles, white papers, and books, please visit my library.


Questions to Ask About Vendors When Evaluating Their Products

January 7th, 2010

Good products are usually developed by good companies. It would be difficult for a bad company—one that is poorly run—to develop a good product. When we evaluate products, in addition to looking at the products themselves, we can learn useful facts that aren't otherwise obvious by asking a few questions about the companies that produce them. Here are a few questions you might want to ask about a software vendor when evaluating one of its products.

  • Does the vendor have deep expertise in the domains that its products support? Does it exhibit this expertise, not only in its products, but in its communications as well, including marketing materials and sales presentations?
  • Does the vendor invest in the development of features and functions in its products that actually work and are actually needed by more than a few users?
  • Does the vendor exhibit a commitment to designing products to be as easy as possible to use?
  • Does the vendor develop products that nudge users in beneficial directions (that is, in directions that actually produce results that effectively serve their needs)?
  • Has the vendor defined its potential users clearly enough and gotten to know them well enough to develop the product in relevant ways?
  • Does the vendor refrain from making marketing claims that are false or otherwise misleading?
  • Does the vendor know how to tell the story of what its product does, how it works, and why it’s good? If it doesn’t, this is a sign that it doesn’t have a clear story to direct its efforts into a coherent product.
  • Does the vendor make it easy for potential buyers to evaluate its products?
  • Does the vendor help its users develop the conceptual skills (not just skills in using the software) that are necessary to use its products productively? For example, if it produces data analysis software, does it offer instruction in the principles and practices of analysis?
  • Does the vendor take the time to develop user documentation that is really helpful, with clear explanations and meaningful examples?
  • Does the vendor’s support mechanism (phone support, etc.) demonstrate that it genuinely wants to solve your problems rather than only provide the minimum support that customers will find tolerable?

I’m not suggesting that these are the only questions to ask about vendors. These are just a few that come to mind that could prove useful. Please feel free to add to and refine this list.

Take care,

Designing Effective Industrial Control System Displays

December 16th, 2009

The High Performance HMI Handbook
A Comprehensive Guide to Designing, Implementing and Maintaining Effective HMIs for Industrial Plant Operations

Bill Hollifield, Dana Oliver, Ian Nimmo, and Eddie Habibi, PAS, 2008

Dashboard displays come in many types, depending on the nature of the information that’s being monitored. While it’s true that all monitoring displays share many best design practices in common, each situation requires specialized designs as well. For example, an airplane cockpit display should look and function quite differently from a business sales dashboard. A book entitled The High Performance HMI Handbook (2008) provides design guidance specifically for displays that are used by control operators in industrial plants. (HMI is an acronym for “Human Machine Interface”.) Apparently, the vendors that develop these systems are like most business intelligence vendors: they don’t understand how to present information effectively, especially for data monitoring and analysis. In fact, they promote really bad data presentation practices. Here’s an example of a typical industrial control room display.

If you’ve read my book Information Dashboard Design and then go on to read The High Performance HMI Handbook by Bill Hollifield, Dana Oliver, Ian Nimmo, and Eddie Habibi, you might think that one of us copied the other’s material. When I wrote my book in 2006, however, I wasn’t familiar with the work of these authors, and I have no reason to believe that they were familiar with mine when they wrote their book last year. The reason the principles and practices presented in our books are so consistent with one another, in many cases down to precise details, is that we are drawing from the same research literature (human factors, human-computer interface design, cognitive science, information visualization, graphic design, etc.) and have both honed our expertise through years of designing practical, real-world data display solutions.

Where our books differ, it is because business dashboards and the displays used to monitor real-time industrial operations have different requirements. Their book is rich in details that apply specifically to control room monitoring, down to the ideal configuration of display devices and the screen colors that provide optimal readability in a typical control room. It is because of the highly specific and therefore limited nature of this book’s audience that it bears the high price tag of $129.99. If you need to design displays for control operators, however, this price is a pittance compared to the benefits that you’ll derive from reading this book.

Let me share a few brief excerpts from the book to give you a peek into its contents.

Regarding the ineffective and irresponsible data display practices of the vendors that develop Distributed Control System (DCS) software:

There is a widespread need for the information in this book. It is not provided by the DCS manufacturers. In fact, the DCS manufacturers usually demonstrate their graphic capabilities using example displays violating almost every good practice for HMIs.

DCS vendors have now provided the capability of creating HMIs with extremely high sophistication, power, and usefulness. Unfortunately, this capability is usually unused, misused, or abused. The basic principles of effective displays are often not known or followed. Large amounts of process data are provided on graphics, but little information. Information is “data in context made useful.” In many cases, the DCS buyer will proceed to design and implement graphics based on the flashy examples created to sell the systems – unaware from a usability and effectiveness point of view, they are terrible.

Sound familiar? This next excerpt will sound familiar as well. Just as business intelligence vendors promote do-it-yourself solutions without providing the guidance that people need to analyze and present data effectively, DCS manufacturers leave it to companies to design their own control displays.

We would think it strange if Boeing sold jetliners with empty cockpits, devoid of instruments logically and consistently arranged for use by the pilot. Imagine if they suggested, “Just get your pilots together and let them design their own panel. After all, they’ll be using it.”

Imagine if your car came with a blank display screen and a manual on how to create lines, colors, and shapes so you could build your own speedometer. While this might actually appeal to the technical audience of this book, the rest of the world would think it strange and unacceptable. And, could you operate your car by using the “Doom” display motif created by your teenage son?

This do-it-yourself approach, without consistent guidance, is the general case with industrial graphics and is a major reason the results are often poorly conceived, inconsistent, and generally of low quality and performance.

Here’s a short quote that will grab your attention.

A graphic optimally designed for running a process and handling abnormal conditions effectively will, in fact, look boring.

Effective monitoring displays don’t “Wow” people with immediate graphical appeal. People often look at my dashboard designs and think “Where are the colors and those cute gauges that I like so much?” Here are a few of the characteristics that the book lists as effective for HMI displays (a rough sketch of these principles in code follows the list):

  • Important information and Key Performance Indicators have embedded trends.
  • There is no gratuitous animation.
  • There is very limited use of color and alarm colors are used only to display alarms and nothing else…Bright, intense (saturated) color is used only for quickly drawing the operator’s attention to abnormal conditions and alarms. If the process is running correctly, the screen should display little to no color.
  • Equipment is depicted in a simple 2-D low-contrast manner, rather than brightly colored 3-D vessels with shadowing.
  • Layout is generally consistent with the operator’s mental model of the process.
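
To make these characteristics concrete, here is a minimal sketch, written in Python with matplotlib, of what a display built on these principles might look like: a muted, low-contrast layout in which saturated color appears only when a value crosses its alarm limit. The process tags, values, and alarm limits are invented for illustration; neither the data nor the code comes from the book.

    import matplotlib.pyplot as plt

    # Hypothetical process values and high-alarm limits (invented for illustration)
    tags = ['Flow', 'Level', 'Temp', 'Pressure']
    values = [42.1, 78.5, 96.3, 12.7]
    high_alarms = [60.0, 90.0, 95.0, 20.0]

    fig, ax = plt.subplots(figsize=(6, 3))
    fig.patch.set_facecolor('#d9d9d9')  # low-contrast gray background
    ax.set_facecolor('#d9d9d9')

    for i, (tag, value, limit) in enumerate(zip(tags, values, high_alarms)):
        in_alarm = value > limit
        # Saturated red is reserved exclusively for abnormal conditions;
        # everything that is running normally stays in muted grays.
        ax.barh(i, value, height=0.5, color='#a6a6a6')
        ax.text(-3, i, tag, ha='right', va='center', color='#404040')
        ax.text(105, i, f'{value:.1f}', va='center',
                color='red' if in_alarm else '#404040',
                fontweight='bold' if in_alarm else 'normal')

    ax.set_xlim(0, 120)
    ax.set_xticks([])
    ax.set_yticks([])
    for spine in ax.spines.values():
        spine.set_visible(False)
    plt.show()

The result is deliberately boring, which is exactly the point the authors make.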

This is all that I’ll share as a glimpse into The High Performance HMI Handbook. If you’re responsible for designing effective industrial control system displays, $129.99 is a small price to pay for the useful guidance in this book.

Take care,

What Intelligence Tests Miss

November 24th, 2009

What Intelligence Tests Miss: The Psychology of Rational Thought, Keith E. Stanovich, Yale University Press, 2009.

Many prominent thinkers over the last few years have pointed out that IQ tests fail to test many abilities of the mind that are useful for making our way in the world. Most argue that there are types of intelligence other than what IQ tests measure, such as emotional intelligence. In his insightful book What Intelligence Tests Miss: The Psychology of Rational Thought, Keith Stanovich frames the problem differently. He argues that it is acceptable and even useful to limit the term “intelligence” to the abilities that IQ tests measure, but that in addition to Intelligence Quotient (IQ), a measure of algorithmic thinking, we should also assess Rational Quotient (RQ), a measure of reflective thinking. IQ fails to measure our ability to exercise good judgment and to make good decisions. Smart people often do dumb things. This is because there is almost no correlation between intelligence and our ability to think rationally, that is, to avoid the thinking errors that lead to poor judgments and the resulting bad decisions that undermine our best interests.

Testing IQ has become thoroughly integrated into American society and values. Stanovich points out that “In our society, what gets measured gets valued.” IQ, either measured directly or through proxy tests such as the SAT, has become the standard that determines academic and professional opportunities, yet it entirely fails to measure people’s ability to think rationally, which is every bit as important. In fact, because rationality has been so thoroughly ignored, we now have an American workforce that is sadly lacking in this critical ability. People throughout organizations, from the lowliest workers to the loftiest leaders, make bad decisions that undermine their interests based on irrational, error-prone assessments of the situations rather than rational consideration of available evidence. In his book Breakdown of Will (2001), George Ainslie describes the situation we find ourselves in today:

The prosperity of modern civilization contrasts more and more sharply with people’s choice of seemingly irrational, perverse behaviors, behaviors that make many individuals unhappier than the poorest hunter/gatherers. As our technical skills overcome hunger, cold, disease, and even tedium, the willingness of individuals to defeat their own purposes stands in even sharper contrast.

That we focus so much attention on intelligence and value it so greatly is not the problem; the problem is that we focus on and value it so exclusively. It makes perfect sense to value intelligence, because life in the modern world has become increasingly complex. This is certainly true of business. Consider the world of banking. In the movie “It’s a Wonderful Life,” the local banker who lent money to familiar folks in his community could have managed with less intelligence than the bank executives of today who oversee dozens of departments, each handling a host of intricately complicated financial transactions. Both then and now, however, a high level of rationality has been required. Its importance has nevertheless remained under-appreciated.

Legal scholar Jeffrey Rachlinski points out a problem with the way professionals are trained today:

In most professions, people are trained in the jargon and skill necessary to understand the profession, but are not necessarily given training specifically in making the kind of decisions that members of the profession have to make.

On several occasions, I’ve written and spoken about this problem, especially as it relates to data analysis and presentation. Rather than learning the concepts and skills that are required, we put a software product on employees’ computers and assume that this is all they need. Software vendors have long promoted this line of reasoning in the way that they market their “intuitive,” “self-service” products.

The truth is, we need a full range of cognitive skills to face the challenges of the workplace and of life in general. As a culture, we must embrace rationality by promoting its value and supporting its development and use as thoroughly as we’ve embraced intelligence.

What does Stanovich mean by rationality?

To think rationally means adopting appropriate goals, taking the appropriate action given one’s goals and beliefs, and holding beliefs that are commensurate with available evidence.

Not everything in life requires rationality or even intelligence (that is, what IQ measures). Much of what we do is effectively managed through autonomous mental processes that involve neither. This is great, because it frees the higher-order processes of cognition, which require conscious attention and greater energy, from being wasted on menial tasks. Walking and even driving are activities that are handled primarily by the autonomous mind. While the autonomous mind does a great job, we get into trouble when we let it handle situations that require higher-order cognition (intelligence and rationality) rather than the automatic rules of thumb that the autonomous mind uses to make decisions. One of the important roles of the reflective mind is to interrupt autonomous processing when higher forms of thinking are required. To make better decisions, we need to value and develop the strengths of our reflective minds. Two important rational abilities, especially for data analysis, are the ability to reason logically and the ability to think in terms of probabilities. Unfortunately, relatively few people have been trained in these skills.
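
To make the probabilistic side of this concrete, here is a classic base-rate problem of the kind that research on reasoning errors often uses, worked through in a few lines of Python. The scenario and its numbers are my own illustration, not an example from Stanovich’s book.

    # A classic base-rate problem: a condition affects 1% of the population;
    # a test detects 90% of true cases but also flags 9% of healthy people.
    # Intuition says a positive test means the condition is probably present;
    # working through the probabilities says otherwise.
    prevalence = 0.01
    sensitivity = 0.90          # P(positive | condition)
    false_positive_rate = 0.09  # P(positive | no condition)

    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    p_condition_given_positive = sensitivity * prevalence / p_positive
    print(f'P(condition | positive test) = {p_condition_given_positive:.1%}')
    # Prints roughly 9.2% -- far lower than most people's intuitive answer.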

Stanovich explains how thinking works based on this tripartite model consisting of the autonomous mind, algorithmic mind, and reflective mind. He talks about the role, importance, strengths and weaknesses of each. He spends a lot of time describing the causes of errors in rational processing (what he calls dysrationalia) and how we can avoid them. And, thankfully, he gives us hope by showing that rational thinking, unlike most intelligence, can be learned. He’s on a mission to make this happen. If you believe in the importance of rationally informed decision making and agree that it’s lacking, I recommend that you read this compelling book.

Take care,

Feature Lists Make Us Comfortable, but Sometimes Make Us Dumb

November 16th, 2009

A few days ago I noticed a blog post by Boris Evelson of Forrester Research titled “How to Differentiate Advanced Data Visualization Solutions.” Forrester is one of the leading IT research and advisory companies. Along with its larger rival, Gartner, it serves as a trusted adviser to thousands of organizations, helping them make decisions about all aspects of information technology. Although it’s convenient for Chief Information Officers (CIOs) to subscribe to a single service for all the advice they need, is this approach reliable? It depends on whether we actually get advice from someone who has the expertise we’re missing. Far too often when relying on these services, however, we get advice from people whose range of topics is too broad to manage knowledgeably. We sometimes find ourselves being advised by someone who understands less about the topic than we do. If you’re looking for advice about data visualization products, based on what I read in Forrester’s blog, I suggest that you look elsewhere.

Evelson provided a list of features that he believes we should look for when shopping for an advanced data visualization solution. Unfortunately, his list looks as if it was constructed by visiting the websites of several vendors that claim to offer data visualization solutions and then collating the features that they offer. I expect more from a service that people pay good money to for advice. We can’t trust most vendors that sell data visualization software to tell us what we should expect from a good product. It is in their interest to promote the features that they offer, and only those features, whether they’re worthwhile or not. In fact, most vendors that offer so-called data visualization solutions know little about data visualization.

Another problem with Evelson’s advice is that it isn’t clear what he means by “advanced data visualization solutions.” What distinguishes advanced solutions from the others? Of the few features on his list that actually characterize an effective data visualization solution (most of his list misses the mark, as I’ll show in a moment), none go beyond the basic functionality that should exist in every data visualization solution, not just those that are “advanced.”

Evelson has offered the kind of analysis and advice that we get from people who dabble in data visualization, rather than those who have taken the time to develop, not just shallow talking points, but an understanding of what’s really needed and what really works.

Let’s take a look at each feature on Evelson’s list in the order presented and evaluate its worth.

Feature #1: “If it’s a thin client does it have Web2.0 RIA (Rich Internet Application) functionality (Flash, Flex, Silverlight, etc)?”

Response: This is a feature that only an IT guy with myopia could appreciate, not someone who actually analyzes and presents data. When evaluating software, we care about functionality and usability, not about the specific technology that delivers it. If we’re exploring and analyzing data via the Web, what matters is that interactions are smooth, efficient, easy, and seamless. How this is accomplished technically doesn’t matter.

Feature #2: “In addition to standard bar, column, line, pie charts, etc how many other chart types does the vendor offer? Some advanced examples include heat maps, bubble charts, funnel graphs, histograms, pareto charts, spider / radar diagrams, and others?”

Response: So it’s the number of chart types that matters? What constitutes a chart type? Do useless chart types count? This is a lot like giving high marks to the software programs with the most lines of programming code, as if that were a measure of quality and usefulness. What matters is that a data visualization solution supports the types of charts that do what we need and that they work really well. Many data visualization products could be dramatically improved by removing many of the silly charts that they offer rather than by adding more to the collection.

Feature #3: “Can the data be visualized via gadgets/widgets like temperature gauges, clocks, meters, street lights, etc?”

Response: Is Evelson serious? Should vendors get points for providing silly, dysfunctional display gadgets? Most of the gauges, clocks, meters, and street lights that many so-called data visualization products provide are worthless. Anyone who understands data visualization knows this to be true. This is what Evelson looks for in “advanced” data visualization solutions?

Feature #4: “Can you mash up your data with geospatial data and perform analysis based on visualisation of maps, routes, architectural layouts, etc?”

Response: While the ability to view and interact with data geo-spatially is critical, most of the “mash-ups” that vendors enable are horribly designed, and thus of little use. Throwing quantitative data onto a Google map doesn’t qualify as effective data visualization. Google Maps (and other similar services) were not designed as platforms for quantitative display, but instead as sources of directions (“How do I get from here to there?”). Good geo-spatial data visualization uses maps that are designed to feature quantitative data only within the context of geo-spatial information that adds meaning to the data. What’s also important is that geo-spatial displays can be combined on the screen simultaneously with other forms of data visualization (for example, bar graphs, line graphs, tables, and so on) to provide a fuller view of the data than geography alone.

Feature #5: “Can you have multiple dynamically linked visualization panels? It’s close to impossible to analyze more than 3 dimensions (xyz) on a single panel. So when you need to analyze >3 dimensions you need multiple panels, each with 1-3 dimensions, all dynamically linked so that you can see how changing one affects another.”

Response: This is probably the clearest description on Evelson’s list of a feature that is actually useful and indeed critical. Whether the separate views of the data set appear in separate panels or not isn’t important, however. What’s important is the ability to visualize the data in multiple ways, from multiple perspectives, on the screen at once. Only then can we construct a comprehensive view and spot relationships that would be impossible to see if we were forced to examine each view independently, one at a time.
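
For readers who want to see the idea in code, here is a minimal sketch of two dynamically linked panels, written in Python with matplotlib (my own illustration, using random data rather than anything from Evelson’s post). Rubber-band a selection in the left panel and the same records light up in the right panel, which plots a different pair of dimensions.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import RectangleSelector

    rng = np.random.default_rng(1)
    data = rng.normal(size=(200, 3))  # 200 records, 3 dimensions

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    left = ax1.scatter(data[:, 0], data[:, 1], color='lightgray')
    right = ax2.scatter(data[:, 1], data[:, 2], color='lightgray')

    def on_select(press, release):
        x0, x1 = sorted([press.xdata, release.xdata])
        y0, y1 = sorted([press.ydata, release.ydata])
        selected = ((data[:, 0] >= x0) & (data[:, 0] <= x1)
                    & (data[:, 1] >= y0) & (data[:, 1] <= y1))
        colors = np.where(selected, 'crimson', 'lightgray')
        left.set_color(colors)
        right.set_color(colors)  # the second panel echoes the selection
        fig.canvas.draw_idle()

    selector = RectangleSelector(ax1, on_select, useblit=True)
    plt.show()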

Feature #6: “Animations. Clicking through 100s of time periods to perform time series analysis may be impractical. So can you animate/automate that time period journey / analysis?”

Response: So far, researchers have found only a limited role for animation in data visualization, especially for data analysis. When Hans Rosling of Gapminder uses bubble plots to tell a story, such as how the correlation between literacy and fertility throughout the world has changed through time, bubbles (one per country) that move to display change through time work because he is narrating, telling us where to look and what it means. Research has shown, however, that these same animated bubble plots are of limited use for data analysis. We simply cannot watch all those bubbles as they follow their independent trajectories through the plot. To compare the paths that two bubbles have taken through time in an animated plot, we must mark their paths with trails that provide static representations of the bubbles’ journeys. Too many software vendors are providing animations that are nothing more than cute tricks to entertain, rather than useful visualizations. We should run from any vendor that has actually taken the time to make the pointers on their silly gas gauges wobble back and forth for several seconds until they eventually stop moving and point to the value that we need.
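
A static trail of this kind is easy to build. Here is a minimal sketch in Python with matplotlib; the countries, years, and numbers are invented for illustration and are not Gapminder data.

    import matplotlib.pyplot as plt

    # Invented literacy/fertility trajectories for two hypothetical countries
    years = [1960, 1980, 2000]
    countries = {
        'Country A': {'literacy': [30, 55, 80], 'fertility': [6.5, 4.8, 2.9]},
        'Country B': {'literacy': [50, 70, 90], 'fertility': [5.0, 3.2, 2.1]},
    }

    fig, ax = plt.subplots()
    for name, c in countries.items():
        # The connected line is the trail: a static record of the bubble's journey
        ax.plot(c['literacy'], c['fertility'], marker='o', label=name)
        for year, x, y in zip(years, c['literacy'], c['fertility']):
            ax.annotate(str(year), (x, y), textcoords='offset points',
                        xytext=(5, 5), fontsize=8)
    ax.set_xlabel('Literacy rate (%)')
    ax.set_ylabel('Fertility (births per woman)')
    ax.legend()
    plt.show()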

Feature #7:  “3 dimensional charts. Can you have a 3rd dimension, such as a size of a bubble on an XY axis?”

Response: Simply asking a vendor whether its products support 3-D displays is the wrong question. 3-D pie charts, bar graphs, and line graphs are almost never useful. Most implementations of 3-D in so-called data visualization products are either entirely gratuitous, and thus distracting, or far too difficult to read. The example that Evelson gave, however (allowing the data points in a scatterplot to vary in size to encode a third quantitative variable), is actually useful, assuming the vendor designs this feature properly. That’s a big assumption.
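
Designing it properly includes a subtle choice that vendors often get wrong: the third variable should drive the area of each bubble, not its radius, because we judge bubbles by how much space they occupy. A minimal sketch in Python with matplotlib, using made-up data:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    x, y = rng.uniform(0, 100, size=(2, 30))
    third = rng.uniform(10, 1000, size=30)  # the third quantitative variable

    fig, ax = plt.subplots()
    # matplotlib's `s` argument is measured in points squared (an area),
    # so passing the values directly scales bubble area rather than radius
    ax.scatter(x, y, s=third, alpha=0.5, edgecolor='gray')
    ax.set_xlabel('Variable X')
    ax.set_ylabel('Variable Y')
    plt.show()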

Feature #8:  “Can you have microcharts (aka trellis) — a two dimensional chart embedded in each row or cell on a grid?”

Response: Evelson is onto something here, but he seems a bit confused about the terms. “MicroCharts” is the name of an Excel add-in product from BonaVista Systems. A microchart is a small chart, such as a sparkline or a bullet graph, that conveys rich information in a small amount of space, such as a single spreadsheet cell. A “trellis” display, which Edward Tufte has for many years called “small multiples,” is something quite different. It is a series of charts that breaks a data set into logical subsets, each with the same quantitative scale, arranged within eye span on a single screen or page for the purpose of making comparisons between the charts. For example, if a single scatterplot of the correlation between the number of sales contacts and sales revenues for 500 customers across 20 separate products would be too cluttered and complex, we might solve the problem by creating a trellis display of 20 scatterplots, one per product.
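
To make the example concrete, here is a minimal sketch of that trellis display in Python with matplotlib, with randomly generated data standing in for the customers and products. The essential ingredients are the shared quantitative scales across panels and the arrangement of all twenty panels within eye span.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(3)

    # A 4 x 5 grid of panels, one per product, with shared x and y scales
    fig, axes = plt.subplots(4, 5, figsize=(12, 9), sharex=True, sharey=True)
    for i, ax in enumerate(axes.flat):
        contacts = rng.uniform(0, 50, size=25)  # invented data
        revenue = contacts * rng.uniform(80, 120) + rng.normal(0, 300, size=25)
        ax.scatter(contacts, revenue, s=8)
        ax.set_title(f'Product {i + 1}', fontsize=8)
    fig.suptitle('Sales contacts vs. sales revenue, one panel per product')
    fig.supxlabel('Number of sales contacts')
    fig.supylabel('Sales revenue')
    plt.show()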

Feature #9: “Can you do contextual or gestural (not instrumented, not pushing buttons, or clicking on tabs) manipulation of visualization objects, as in video games or iPhone like interface?”

Response: Evelson might be getting at something useful here, but he hasn’t distinguished the gratuitous video game-like interactions that have become all too common in many so-called data visualization products from the useful interactions that are needed to uncover meanings that live in our data, which only a few products actually support. For data exploration and analysis, it’s quite useful to interact with visualizations of data directly to change the nature of the display in pursuit of meaning, such as to sort or filter data. For instance, rather than using a separate control or dialog box to remove outliers in a scatterplot, it’s useful to be able to grab them with the mouse (or with your finger on a touch screen) and simply throw them away.
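
Here is a minimal sketch of that outlier-throwing interaction in Python with matplotlib (my own illustration, using random data): click a point in the scatterplot and it disappears.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(4)
    points = rng.normal(size=(100, 2))

    fig, ax = plt.subplots()
    scatter = ax.scatter(points[:, 0], points[:, 1], picker=5)  # 5-point pick radius

    def on_pick(event):
        if event.artist is not scatter:
            return
        # Drop the clicked point(s) from the collection and redraw
        offsets = scatter.get_offsets()
        keep = np.ones(len(offsets), dtype=bool)
        keep[event.ind] = False
        scatter.set_offsets(offsets[keep])
        fig.canvas.draw_idle()

    fig.canvas.mpl_connect('pick_event', on_pick)
    plt.show()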

Feature #10: “Is the data that is being analyzed
a) Pulled on demand from source applications?
b) Stored in an intermediary DBMS
c) Stored in memory? This last one has a distinct advantage of being much more flexible. For example, you can instantaneously reuse element as a fact or a dimension, or you can build aggregates or hierarchies on the fly.”

Response: What really matters is not where the information is stored, but how easily, flexibly, and rapidly we can access and interact with the data that we need. How this is accomplished technically needn’t concern us as long as it works.

Feature #11: “Is there a BAM-like operational monitoring functionality where data can be fed into the visualization in real time?”

Response: When real-time data updates are needed, this is a useful feature, but few data visualization solutions require real-time updates.

Feature #12: “In addition to historical analysis, does visualization incorporate predictive analytics components?”

Response: This is indeed useful, but much of what many vendors call “predictive analytics” is neither predictive nor analytical. Rather than simply asking vendors if they support predictive analytics (you will never get a “No” answer to this question), we should ask questions such as: “Can the software be used to build effective predictive models (that is, models that are statistically robust) that allow us not only to determine the probability of particular results under particular conditions, but also to see, understand, and therefore reason about the interactions between variables that contribute to those results?”

Feature #13: “Portal integration. If you have to deliver these visualizations via a portal (SharePoint, etc) do these tools have out of the box portal integration or do you need to customize.”

Response: Generic portal integration isn’t important. If you use a particular portal product and you need your analytics tools to integrate with it, then this specific requirement might be useful to you. It should not, however, be a reason to reject an otherwise effective data visualization solution. There are so few good solutions to choose from today that you shouldn’t let someone in your IT department turn away the one that’s useful to you because it doesn’t integrate neatly into your organization’s portal.

At the end of his list of features, Evelson asked, “What did I miss?” I appreciate his openness to suggestions. More than what he missed, however, I’m concerned about the features that he included that are either unimportant or that in some cases actually undermine data visualization.

Fundamentally, Evelson missed the opportunity to assess the effectiveness of data visualization solutions. Lists of features, even good ones, fail to do this. Another fundamental problem is that his list lumps all data visualization solutions together, as if every purpose for which data visualization might be used requires the same functionality. This is far from the truth. Uses of visualization for monitoring, analysis, or communication, although they share much in common, require many distinct features as well. When shopping for data visualization software, you must first know what you plan to accomplish with it and then determine the features that are specifically required for that purpose. Unless you’re planning to use a single tool for all purposes, you won’t need everything that a data visualization solution could possibly offer.

Evelson is but one of many people whom organizations erroneously trust for critical advice. Regarding data visualization, he lacks the expertise that’s required and legitimately expected. Anyone who sets himself up as an adviser, especially one whom organizations pay dearly, ought to develop deep expertise in the subject matter. Before we can shop effectively for technology, we must first shop effectively for reliable sources of advice.

Take care,

Malcolm Gladwell, modern problems, and the analytics age

September 21st, 2009

I had the great pleasure last Thursday of hearing Malcolm Gladwell, journalist and author of the books Outliers, Blink, and The Tipping Point, speak at SAS Institute’s Innovators’ Summit in Chicago. I gave one of two keynote presentations in the morning and Gladwell gave the keynote in the afternoon. I believe that Gladwell is one of the great thinkers and communicators of our time, and his words on Thursday afternoon led me to believe this even more fervently.

Gladwell’s topic was well-chosen for a group of people who spend their time making sense of data (mostly statisticians) using SAS’ visual analysis product JMP. He spoke about problem solving and the fact that our problems today are different from those of the recent past. Our former problems were usually solved by digging up and revealing the right information. He used Watergate as an example, pointing out that the key information was hidden, and the problem was solved when Washington Post journalists Woodward and Bernstein were finally able to uncover it. Modern problems, on the other hand, are not the result of missing or hidden information, Gladwell argued, but the result, in a sense, of too much information and the complicated challenge of understanding it. Enron was his primary example. The information about Enron’s practices was not kept secret. In fact, it was published in several years’ worth of financial reports to the SEC, totaling millions of pages. The facts that led to Enron’s rapid implosion were there for anyone who was interested to see, freely available on the Internet, but they weren’t understood until a journalist spent two months reading and struggling to make sense of Enron’s earnings. That effort led him to discover that the earnings existed only as contracts to buy energy at particular prices in the future, not as actual cash in the bank. The problems that we face today, both big ones in society like the current health care debate and smaller ones like strategic business decisions, do not exist because we lack information, but because we don’t understand it. They can be solved only by developing skills and tools to make sense of information that is often complex. In other words, the major obstacle to solving modern problems isn’t the lack of information, solved by acquiring it, but the lack of understanding, solved by analytics.

Gladwell’s insights were music to my ears, because he elegantly articulated something that I and a few others have been arguing for years, but he did so in a way that was better packaged conceptually. Several months ago I wrote in this blog about Richards J. Heuer’s wonderful book Psychology of Intelligence Analysis, and featured his assertion that we don’t need more data, we need the means to make sense of what we have. More directly related to my work in BI, I’ve stated countless times that this industry has done a wonderful job of giving us technologies for collecting and storing enormous quantities of data, but has largely failed to provide the data sense-making tools that are needed to put data to use for decision-making.

The title of my keynote at the Innovators’ Summit this year was “The Analytics Age.” I argued that the pieces have finally come together that are needed to cross the threshold from the Information Age, which has produced great mounds of mostly unused information, to the Analytics Age, when we’ll finally learn how to understand it and use it to make better informed, evidence-based decisions. The pieces that have come together include:

  • plenty of information
  • proven analytical methods (especially statistics enhanced through visualization)
  • effective analytical tools (only a few good ones so far, but this will change)
  • a growing awareness in society that analytics are needed to replace failed decision-making methods, based on whim and bias, that have led us to so much trouble

Although many software vendors claim to sell analytics tools, seeking to exploit the growing awareness that analytics are powerful and necessary, few actually understand this domain. Their products demonstrate this fact by offering silly imitations of analytical techniques. This is true of every traditional BI software vendor in the market today. As Gladwell pointed out, the paradigm has shifted; the skills and methods that worked in the past can’t solve the problems of today. Only a few software vendors that play in the BI space (none of which represent traditional BI) have the perspective and knowledge that are required to build tools that can help us solve modern problems. Most of these have either evolved from a long-term focus on statistics, such as SAS Institute, or have emerged as spin-offs of academic research in information visualization, such as Tableau and Spotfire. If traditional BI vendors want to support the dawning analytics age, they must retool. They must switch from an engineering-centric worldview, focused primarily on technology, to a design-centric perspective, focused primarily on the human beings who actually work with data. Only then will they be able to build effective analytical tools that take advantage of human visual and cognitive strengths and augment human weaknesses.

Borrowing another insight from Gladwell, I believe we are approaching the “tipping point” when people will no longer be fooled by analytical imitations and will begin to develop the skills and demand the tools that are needed to get the job done. Business intelligence vendors that fail to catch on or to turn their unwieldy ships in time will be left behind. The times are changing and so must they.

If you’re among the minority in the workforce today who understand analytics and are willing to tie your talents to good tools that utilize them fully, you are in for the ride of your life. As Hal Varian, University of California, Berkeley professor and current Chief Economist at Google, recently stated in an interview, “statistician” will become the sexy job of coming years, just as software engineers enjoyed that position for years, beginning in the 1980s. Evidence of this can already be discerned. Even in today’s depressed job market, graduates with degrees in statistics are in extremely high demand and are being rewarded with high salaries. You don’t need a Ph.D. in statistics to be a good data analyst, of course. You must, however, have the soul of an investigator and a flexible, analytical mind. You must be able to think critically. Daniel Pink made this case brilliantly in his book A Whole New Mind (2005). What I’m calling the “analytics age,” he called the “conceptual age.”

Our schools are not geared up to produce this kind of workforce, so if you’ve somehow managed to develop these skills, there’s a place of honor for you in the world that’s emerging. You’ll be appreciated in ways that were rare during those years when I worked in the corporate world. Won’t it be refreshing to actually be thanked when you reveal, through painstaking analysis, faults in your organization’s policies, practices, or assumptions, rather than being ignored or punished? Won’t it be nice to be rewarded when you save your organization millions of dollars by warning against a doomed decision rather than being demoted for speaking a politically unpopular truth? Won’t it feel good to prepare a well-reasoned case for a specific course of action and not have your hard work discarded in the blink of an eye by a manager who says, “No, we’ll do it my way, because I’m the boss”? If your heart sings at these prospects, hold your head up and stay true; your day is coming.

Am I dreaming? Can a society and a workplace in which reason and evidence trump whim and bias really emerge with enough strength to shift the balance? I hope so, but there’s no guarantee. I’m going to do everything I can to help usher it in. The opportunity is now. I don’t want to live in the sad, dumb, unjust society that is our future if this opportunity is missed.

Take care,