
Thanks for taking the time to read my thoughts about Visual Business
Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions
that are either too urgent to wait for a full-blown article or too
limited in length, scope, or development to require the larger venue.
For a selection of articles, white papers, and books, please visit
my library.
March 9th, 2010
The software vendors that have dominated the business intelligence market for the last 15 years or so have hit the wall, and they haven’t a clue how to scale it. They’re stuck because they insist on applying the skills and methods that helped them successfully build data warehouses and production reporting systems to a radically different problem: data sense-making. Their past achievements were grand feats of engineering, solved almost entirely with technology, but data sense-making (also known as analytics) requires a different approach—one that leads with design, not engineering, and focuses on people, their needs and abilities, not technology. Attempts by big BI companies to open the data repository for exploration have produced some embarrassing tools, and they keep on coming. One of the newest examples is SAP BusinessObjects Explorer.
In an article written for SAP Insider, Jeff Veis, Vice President of Industry Solutions and Strategic Initiatives for SAP BusinessObjects, sets expectations for BusinessObjects Explorer:
When most users think of business intelligence (BI), they think of it in a very traditional sense: predefined reports that can’t account for real-time market fluctuations and that don’t allow business users to truly engage with the information.
SAP BusinessObjects Explorer software changes all that. Using the tool, companies can extend the reach of BI to all business users — not just a small subset of expert data analysts…SAP BusinessObjects Explorer enables deep exploration of vast amounts of data, enabling users to identify, manipulate, and act on insights that a pre-structured, traditional BI tool would be hard pressed to deliver.
(“Transform the Way Your Company Thinks about Business Intelligence”, Jeff Veis, SAP Insider, Jan-Feb-Mar 2010)
In the same issue of SAP Insider, Jonathan D. Becher, Senior Vice President of Marketing, answered the question “Can you contrast SAP BusinessObjects Explorer to the business intelligence (BI) tools our readers now have in place?” as follows:
SAP BusinessObjects Explorer is about data exploration, not report generation.
The second big differentiator is accessibility. Everybody who needs access to your company’s business data can use SAP BusinessObjects Explorer. While many of your readers work for companies that are now running BI solutions, most of the employees who need access to business data can’t use the tools. Mastering their requirements and interfaces just isn’t practical, so the BI tools remain the exclusive purview of a relatively scarce number of power users, analysts, and IT department members, and the broader business user community has to go through one of these intermediaries to get their questions answered.
It’s analogous to the way people made phone calls a few generations back. To place a call, you would pick up the receiver and wait for an operator to get on the line and ask to whom you’d like to speak. If that person was in a different part of the country, a series of local operators, each covering specific regions of the country, worked to facilitate the connection. There was nothing self-service about it. In a very real sense, this is the way BI — and frankly, decision making — works today.
SAP BusinessObjects Explorer changes this. It is so easy to use, it democratizes access to data.
(“Better Answers through Better Questions”, Jonathan D. Becher, SAP Insider, Jan-Feb-Mar 2010)
This is roughly the same explanation that big BI companies have been giving for the last 15 years. Terms like “self-service BI” and the “democratization of data” have been used in association with every new product that they’ve introduced since the day that the term “business intelligence” was coined. Obviously, however, none of their past products have achieved this, which is why they keep coming out with new ones to cure the ills caused by every unnecessarily complicated under-performing product that they’ve delivered in the past. But if the previous generation of products didn’t achieve these goals, why should we believe that BusinessObjects Explorer will?
Let’s look at an example of how Becher thinks this new tool will operate in the workplace.
It’s pretty common for participants to show up at planning meetings with their go-to PowerPoint presentation, replete with their favorite metrics about what strategic concerns the business faces. And it’s extraordinarily common for these metrics not to match up — at all. Consider a simple question: How many new customers did we acquire last quarter?
The operations organization posits that people who bought and then returned products do not constitute “new customers.”
Metrics from the head of marketing, who views returns as a quality issue, not a sales issue, do count customers who bought and subsequently returned products last quarter as “new customers.”
The Large Enterprise Sales organization recognizes “new customers” as only those with orders in excess of US$10,000.
Given that the purpose of the meeting is to devise or refine plans, do you really want to lose another planning cycle sending participants off in pursuit of a new definition of the term “new customer,” asking them to regenerate their figures? With SAP BusinessObjects Explorer, this could be done in real time, with all stakeholders looking at the same data.
So, what this new product will finally put within our reach is the earth-shattering ability to get an answer to the question “How many new customers did we acquire last quarter?” without having to involve the IT department. Be still my heart; it’s all a-flutter. A few paragraphs later in the same article, still referring to this data sense-making miracle, Becher states: “If SAP BusinessObjects Explorer sounds revolutionary, it’s because it is revolutionary.” [Long dumbfounded silence] Huh?!!!
Lest we be accused of missing the real miracle here, let’s take into account Becher’s claim that this new tool will eliminate the confusion and roadblocks to consensus caused by the fact that the Operations, Marketing, and Sales departments each define new customers differently, and therefore come up with different new customer counts. This must be some magical tool if it somehow puts everyone on the same page, despite their different perspectives. If this sounds to you like marketing smoke and mirrors, you’re getting the picture.
And it gets better.
Take the example one step further. Let’s say that I am the head of the Large Enterprise Sales organization, and I want to compare sales in select regions of the country. I throw in a few other requirements, and we’re no longer dealing with a standard query — so I have to enlist the help of an analyst. The analyst needs certain warehouse statistics, but finds the right data isn’t loaded, so a call goes out to a data architect, who in turn enlists the help of others to cleanse and load the data. Eventually, I get the report. And 99 times out of 100, the experience ends with something like this: “Oh! That’s not the question I meant to ask. I meant to specify New York City, not New York state, and I actually needed to account for sales that took place in the wake of a new promotional campaign.”
With BusinessObjects Explorer, according to Becher, problems like these will go away. How? Through a new interface that will allow you to ask questions of your data similar to the way you search the Web with Google today. Anyone who understands BI, however, knows that no interface, no matter how magical, will give you access to data that isn’t available, will clean data that is dirty, or will simplify the navigation of complicated operational databases. These improvements are accomplished by a whole lot of hard work on the back end (probably done by someone in IT, because only they have access) to prepare the data for use.
Enough of these same old hollow claims from the big BI vendors, which have been frustrating and angering users for years. How much longer will we let them raise our hopes and dash them before we finally look elsewhere for answers?
Let’s forget what SAP BusinessObjects is saying about Explorer and take an honest, objective look at it ourselves.
Caution
Don’t mistake what I’ve written as a case against Big BI in favor of Small BI. It is entirely possible for large BI vendors to provide effective tools for data sense-making. To do this, they need to switch from a technology-centric, engineering-focused approach to a human-centric, design-focused approach, and base their efforts on a deep understanding of data sense-making. Most of the small BI vendors have done no better in cracking this nut than the big guys. They might be more agile due to their small size and thus able to bring a new product to market more quickly, but when they approach the problem in the same dysfunctional way as the big guys, they fail just as miserably. Just like politicians who sell themselves as “not like the guys in Washington,” new players in the BI space often point to the failures of the big guys and then go on to do exactly the same. I am not making a case of small vs. big, but of clear-headed, informed, and effective vs. an old paradigm that doesn’t work for the challenges of data sense-making.
Review of BusinessObjects Explorer
As quoted above, Jeff Veis claims that “SAP BusinessObjects Explorer enables deep exploration of vast amounts of data, enabling users to identify, manipulate, and act on insights that a pre-structured, traditional BI tool would be hard pressed to deliver.” To test these claims, I asked Bryan Pierce, who works with me here at Perceptual Edge, to access an evaluation copy of the tool on SAP’s website and put it through its paces. The following are Bryan’s findings.
Basically, I didn’t really find anything good about SAP BusinessObjects Explorer. If it was all you had, you could use it to perform some analysis, and it might be a little easier for certain types of exploratory analysis than a tool like Excel, but compared to other tools that are actually designed for exploratory analysis, it’s a joke. Here is an example of the BusinessObjects Explorer interface:

1) Perhaps the single biggest problem with SAP BusinessObjects Explorer is that it only allows you to view one graph at a time. In addition to this, it only allows a maximum of three measures in a graph, so you can only have three lines in a line graph and three segments in a stacked bar graph. There is a notable—but not useful—exception to the single graph rule. If you select two or three measures and choose a pie chart or a radar chart it will create two or three graphs next to each other, although I couldn’t find a way to make them share a quantitative scale (this would only be applicable for radar charts). Unfortunately, I couldn’t find any way to get multiple versions of any useful graph types.
2) Although you can view up to three measures at once, there’s even less functionality when viewing categorical variables. For instance, the dataset I analyzed had three years’ worth of quarterly data. Using a line graph, I could view the values for the three years and I could look at a particular year and see the quarterly values, but I couldn’t find a way to view all three years at a quarterly level simultaneously (either by using a single line that spanned twelve quarters or by using three lines that each spanned four quarters). Similarly, while I could get two lines or stacked-bar segments for Profit and Expenses (both measures), I couldn’t find a way to get separate lines or stacked-bar segments for States or Cities.
3) In attempting to determine why I couldn’t get a line graph to display quarterly sales for more than one year at a time, I uploaded a custom dataset of time-series data. It appears that BusinessObjects Explorer handles time-series data very poorly. The dataset I uploaded contained daily sales data for two products over 90 days. When I first opened it, this is how the date variable was displayed:
As you can see, the variable has been sorted by the total sales for each date, rather than the dates themselves. To fix this, I clicked the little down arrow in the top-right corner of the image and told it to re-sort by date. This is what appeared:
For some reason, every date except for one of them disappeared; I had to close and reopen the dataset to get the other dates to reappear. So, apparently there’s something buggy with the way BusinessObjects Explorer handles dates. In fact, the only way I could get time-series data to work correctly was when I separated years, months, and days into different variables. If you do this, just make sure that you format the months as numbers, because that’s the only way they’ll sort correctly. Unless, of course, you like alphabetical time:
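This “alphabetical time” trap is easy to reproduce outside the tool. Here’s a quick Python sketch (a generic illustration, nothing specific to Explorer): sorting month names as text scrambles the year, while sorting by month numbers preserves chronological order.

```python
# Month names sort alphabetically, which scrambles chronological order.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

alphabetical = sorted(months)  # begins with Apr, Aug, Dec, ...

# Mapping each name to its month number restores chronological order,
# which is why formatting months as numbers sorts correctly.
month_number = {name: i for i, name in enumerate(months, start=1)}
chronological = sorted(months, key=month_number.get)
assert chronological == months
```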
4) With BusinessObjects Explorer, you’re not able to customize the appearance of graphs in any way. This isn’t as important as it would be for a data presentation tool, like Excel, but even an analysis tool should let you do things like disable the gridlines or remove the data points from a line graph.
5) Like most web-based analysis tools, it responds too slowly for seamless interaction. After a filter is applied there is a short delay while the application contacts the server for the new data. This delay is only about one or two seconds long, but that’s still more than enough to hamper an analyst’s train of thought.
6) There were several times when I switched over to time-based views where the graphs weren’t sorted in chronological order. For example, I was viewing a bar graph that showed Margin and Quantity Sold by State, which was sorted by the Margin values. I then switched from viewing the graph by State to viewing it by Quarter. The sort by Margin was still in effect so the graph displayed the bars in this order: Q4, Q2, Q1, Q3. It’s one thing to allow people to arrange time-series information in non-chronological order for those extremely rare cases when that might be useful. It’s quite another thing to allow time-series data to be arranged in this way by the software.
7) The program includes all the standard graph types and a few unexpected ones, such as treemaps (although, how useful is a treemap when it only takes up about 1/3 of your screen space?), but it doesn’t include box plots.
8) Speaking of treemaps, they should be used to navigate hierarchical data, for instance, to view sales and margin data at the regional, state, and city levels. However, the treemaps in BusinessObjects Explorer appear to only allow a single level of hierarchy. Here is a treemap that is displaying Sales and Margin by State:

In a functional treemap, I should be able to display each city’s data as a smaller square within the corresponding state square and, potentially, to select a particular city and drill into it to see even more detailed data (such as sales by individual stores). Unfortunately, I could find no way to do any of this. As a side note, red and green are the worst two colors to use to encode data (in this case they’re encoding margin values from highest to lowest), because most people who are colorblind (10% of males and 1% of females) can’t distinguish between the two colors. I would have switched to more suitable colors, but, as I mentioned before, I couldn’t find a way to modify the appearance of any of the graphs.
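To show what stopping at a single level of hierarchy costs, here is a minimal sketch of the classic slice-and-dice treemap layout in Python, using made-up state and city figures (this is a generic algorithm for illustration, not Explorer’s):

```python
def slice_and_dice(items, x, y, w, h, horizontal=True):
    """Partition the rectangle (x, y, w, h) among items in proportion to
    their values, alternating the split direction at each level of the
    hierarchy. Each item is a (label, value, children) triple."""
    total = sum(value for _, value, _ in items)
    rects = []
    offset = 0.0
    for label, value, children in items:
        frac = value / total
        if horizontal:
            rect = (x + offset * w, y, frac * w, h)
        else:
            rect = (x, y + offset * h, w, frac * h)
        offset += frac
        rects.append((label, rect))
        if children:  # recurse: cities nest inside their state's rectangle
            rects.extend(slice_and_dice(children, *rect,
                                        horizontal=not horizontal))
    return rects

# Hypothetical sales figures: states at the top level, cities nested inside.
data = [("CA", 50, [("Los Angeles", 30, []), ("San Francisco", 20, [])]),
        ("NY", 50, [("New York City", 40, []), ("Albany", 10, [])])]
rects = dict(slice_and_dice(data, 0, 0, 100, 100))
```

Each city ends up with a rectangle inside its state’s rectangle, which is exactly the drill-down structure that Explorer’s one-level treemaps lack.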
9) The user doesn’t have enough control of the layout of the display. When viewing both the filter controls and the visualization, the filter controls take up as much space as the visualization. You can hide the filter controls, which gives the visualization more space, but there’s no way to just reduce the filter controls’ size. In addition to using too much space, the filter controls make poor use of the space they require. The filters were designed so that each filter column has the same width. As a result, there are some filters that contain large amounts of wasted space, like the Quarter filter below:
If the filters had been designed to size themselves based on their contents, more filters could have fit on the screen at the same time, which would make filtering more efficient.
10) The shape of the graph is also a problem sometimes. Because the plot area is about four times wider than it is tall, it makes certain types of graphs awkward to read, such as scatterplots (which should usually be roughly square in shape) or vertical bar graphs that only have a few bars (in which case, the bars might be wider than they are tall).
11) In the filter section, totals are displayed next to each categorical value. For instance, here is the quarter variable:
As you can see, the totals aren’t all written to the same precision. The Q1, Q3, and Q4 totals are written to the tenth of a dollar, while the Q2 total is written to the whole dollar. This means the decimal points don’t line up, which makes the numbers harder to read.
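The fix is trivial: format every total to the same precision instead of letting each value choose its own. A quick Python sketch with hypothetical totals:

```python
# Hypothetical quarterly totals, mimicking the mixed precision in the filter.
totals = {"Q1": 1154273.4, "Q2": 987310, "Q3": 1203991.7, "Q4": 845602.2}

# A fixed format (one decimal place, right-justified) keeps every
# decimal point in the same column.
lines = [f"{quarter}  {total:>14,.1f}" for quarter, total in totals.items()]
for line in lines:
    print(line)
```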
These are the problems that Bryan found while spending two hours with the product. A deeper look would no doubt produce a longer list, but Bryan was only trying to spot the big problems that most severely undermine the product’s use for data exploration and analysis. Had we taken the time to compare BusinessObjects Explorer to one of the good data exploration and analysis products that are available today, such as Tableau or Spotfire, the claim by BusinessObjects that Explorer is “revolutionary” would be exposed more clearly for what it is: a sad statement about this Big BI company’s understanding of data sense-making. BusinessObjects is struggling to catch up with human-centered, design-focused companies like Tableau and Spotfire, which are running circles around them and making them look pathetic. SAP BusinessObjects and most other Big BI companies haven’t taken the time to understand data sense-making in general, data visualization in particular, or even the real needs of their customers. They need a new mindset, but learning to see the world with new eyes is hard. By the time they figure this out and make the shift, will it be too late?
Take care,

February 22nd, 2010
A new book about information graphics was published last month titled The Wall Street Journal Guide to Information Graphics, by Dona M. Wong, the graphics director for this respected newspaper. I get excited whenever a new book about data visualization is published, especially one that teaches practical techniques, because too few of us are working in this field. This new addition to my library has its merits, but unfortunately it has its problems as well.
To begin, this book is not what its advertising claims it to be. Rather than “the definitive guide to the graphic presentation of information” and “an invaluable reference work for students and professionals in all fields,” which the dust cover claims, it would be more accurately described as a graphical style guide for financial journalism. I suspect that the content of this book was in fact written by Wong originally as the graphics style guide that is used internally at The Wall Street Journal, and that the newspaper envisioned a new source of revenue by revising it slightly and publishing it as a book. There’s certainly nothing wrong with that, but they should have more clearly described its scope as restricted primarily to the interests of financial journalism.
The quality of this book that will no doubt appeal to many potential readers is, in my opinion, its fundamental failure: it includes relatively few words. Unlike her mentor, Edward Tufte, who uses words liberally and eloquently, Wong’s style of writing is closer to the bullet point approach that Tufte disdains. In this respect, it is different from my books, which have at times been criticized for having too many words. A few readers have remarked that I don’t follow my own principle of simplicity in my books because I use too many words to present the material. What they don’t appreciate is the important difference between simplicity and over-simplification. I provide the context that people need to understand what I teach. When you tell people what they should and shouldn’t do without explaining why, they can at best learn only superficially. To learn deeply, people must understand things at a conceptual level: why things work as they do. This requires more than a few words. Wong’s book has too few. In total, the book includes 120 pages of actual content, which consists mostly of figures. The fact that so many figures exist is not the problem; it is in failing to explain her recommendations that she errs. She says “Do this and don’t do that,” but rarely helps her readers understand why. One problem with this is that Wong isn’t always right, but people who are learning about information graphics for the first time won’t realize this.
Wong states a few rules that entirely miss the mark, but more often she emphatically states what are at best rules of thumb, which must allow many exceptions. While reading the book, I found myself frequently writing comments in the margins such as “it all depends” and even “not true.” To give you a sense of this, here are a few excerpts from the book, followed by my margin comments:
Wong’s Words: “Do not plot more than four lines on a simple [line] chart.” (p. 54)

My Margin Comments: Rule of thumb with many exceptions. Depending on the nature of the data (for example, how close the lines are in value and how much variability in values exists along the lines), a graph could contain many more than four lines and still work quite well. Also, when line graphs are used, not for comparing individual lines, but to provide an overview in a way that features exceptions and predominant patterns, far more than four lines can be included.

Wong’s Words: “Don’t use different colors or colors on the opposite side of the color wheel in a multiple-bar chart.” (p. 40)

My Margin Comments: It depends. Different hues work best for differentiating items, which is what’s usually needed in line graphs with multiple lines, bar graphs with multiple sets of bars, and so on.

Wong’s Words: “Choose the y-axis scale so that the height of the fever line occupies roughly two-thirds of the chart area.” (p. 51)

My Margin Comments: Ineffective rule. I think what Wong’s trying to do is bank the line to 45° so it’s not so flat that the trend and pattern can’t be seen, but this approach won’t guarantee this result. Setting the y-axis scale to begin just a little below the lowest value and end just a little above the highest value makes better use of the plot area. Once this is done, the aspect ratio of the graph (the ratio of its width to its height) can be adjusted to prevent the slope of the line from being either too shallow or too steep.

Wong’s Words: “A segmented bar chart in general is more effective than a pie chart at showing proportions of a whole.” (p. 79)

My Margin Comments: Not true. Actually, for showing a single part-to-whole relationship, a segmented (a.k.a. stacked) bar is never more effective than a pie chart, and in my opinion, neither works as well as a standard bar graph.

Wong’s Words: “Always label the value of a vertical bar if it is close to zero.”

My Margin Comments: It depends on how the graph is used. Labeling these values is only useful when people need precise values, and why would this rule apply to vertical bars and not to horizontal bars?

Wong’s Words: When it is appropriate to use different color intensities to differentiate series of bars in a bar graph, Wong states: “The shading of the bars should move from the lightest to the darkest for easy comparison.” (p. 67)

My Margin Comments: Huh? Why not ever from the darkest to the lightest bars?

Wong’s Words: “When plotting horizontal bars over time, the bars should be ordered from the most recent data point [at the bottom] and go back in time [proceeding upward].” (p. 71)

My Margin Comments: Don’t do this. I recommend that horizontal bars never be used for time-series data, because it is much more natural for people to think of time as proceeding horizontally from left to right.
This is just a sample of the problems that I noted. Another point on which Wong and I definitely disagree has to do with her recommendations for making the quantitative scales of multiple line graphs different in an effort to make them more comparable, which she addresses in four different sections of the book. In one instance, she wants to make sure that people don’t miss the fact that the following two stocks increased at much different rates, which might occur if they were shown the following graph.
Her solution is to show the following graph instead.
Although I share Wong’s concern, her solution is misleading. To feature the differences in percentage change, the same percentage scale could be used for both graphs, as shown below.
The best solution, however, unless the differences in the magnitudes of change really don’t matter, would be to tell a richer story by presenting the following collection of graphs.
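Using the same percentage scale for both graphs amounts to indexing each series to its starting value, that is, plotting percentage change from the first point rather than raw price. A small Python sketch with hypothetical prices:

```python
def percent_change(prices):
    """Re-express a price series as percentage change from its first
    value, so series of different magnitudes share one comparable scale."""
    base = prices[0]
    return [100.0 * (price - base) / base for price in prices]

# Hypothetical closing prices for two stocks of very different magnitudes.
stock_a = [20.0, 22.0, 25.0, 30.0]
stock_b = [200.0, 204.0, 210.0, 220.0]

change_a = percent_change(stock_a)  # roughly 0, 10, 25, 50 percent
change_b = percent_change(stock_b)  # roughly 0, 2, 5, 10 percent
```

Plotted on one shared percentage scale, the difference in the two rates of growth is immediately visible without giving each graph its own misleading scale.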
Given the fact that Wong studied under Tufte’s supervision at Yale, I expected to find little with which I would disagree. I was surprised to discover otherwise. Despite our disagreements, I agree with most of Wong’s suggestions, but in almost all such cases she restates what I and others have said before. If you’re already an expert in data visualization, you’ll learn little from this book, except a few techniques that are specific to financial journalism. If you’re a novice hoping to learn the fundamentals of information graphics, be warned that this book advocates a few bad practices along with the good, and it rarely explains the concepts that you must understand to produce effective graphs on your own.
Take care,

February 10th, 2010
This blog entry was written by Bryan Pierce of Perceptual Edge.
The chances are good that you’ve seen network visualizations before, such as the one below in which the circles and octagons represent large U.S. companies and each connecting line represents a person who sits on the board of both companies.
(This image was created by Toby Segaran: http://blog.kiwitobes.com/?p=57)
While these types of graphs have become more common in recent years, there’s still a good chance that you’ve never created one yourself. This is because, traditionally, to create network visualizations, you’ve either needed specialized (and often unwieldy) network visualization software or a full-featured (and usually expensive) visualization suite. That’s no longer the case. A team of contributors from several universities and research groups, including the University of Maryland and Microsoft Research, recently released NodeXL, a free add-in for Excel that allows you to create and analyze network visualizations.
Using NodeXL you can import data from a variety of file formats and it will automatically lay out the visualization for you, using one of twelve built-in layout algorithms. For instance, here’s one with a circular layout:
Below, the same dataset is laid out using the Harel-Koren Fast Multiscale algorithm, which is one of NodeXL’s two force-directed algorithms. Force-directed algorithms are designed to make all the lines (a.k.a. “edges”) about the same length and to minimize line crossings, which can make for a more aesthetically pleasing and readable graph.
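If you’re curious what a force-directed algorithm actually does, here’s a toy version in pure Python, loosely in the Fruchterman-Reingold style (a generic illustration, not NodeXL’s implementation): every pair of nodes repels, every edge attracts, and a shrinking “temperature” caps how far nodes may move each round.

```python
import math
import random

def force_directed_layout(nodes, edges, width=1.0, height=1.0,
                          iterations=50, seed=42):
    """Toy force-directed layout: all node pairs repel, connected nodes
    attract, and movement is capped by a cooling temperature."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(0, width), rng.uniform(0, height)] for n in nodes}
    k = math.sqrt(width * height / len(nodes))  # ideal edge length
    t = width / 10                              # initial temperature
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        for i, v in enumerate(nodes):           # repulsion: every pair
            for u in nodes[i + 1:]:
                dx = pos[v][0] - pos[u][0]
                dy = pos[v][1] - pos[u][1]
                dist = math.hypot(dx, dy) or 1e-9
                force = k * k / dist
                disp[v][0] += dx / dist * force
                disp[v][1] += dy / dist * force
                disp[u][0] -= dx / dist * force
                disp[u][1] -= dy / dist * force
        for v, u in edges:                      # attraction: along edges
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            dist = math.hypot(dx, dy) or 1e-9
            force = dist * dist / k
            disp[v][0] -= dx / dist * force
            disp[v][1] -= dy / dist * force
            disp[u][0] += dx / dist * force
            disp[u][1] += dy / dist * force
        for v in nodes:                         # move, capped by temperature
            dx, dy = disp[v]
            dist = math.hypot(dx, dy) or 1e-9
            pos[v][0] += dx / dist * min(dist, t)
            pos[v][1] += dy / dist * min(dist, t)
        t *= 0.9                                # cool down
    return pos

layout = force_directed_layout(["a", "b", "c", "d"],
                               [("a", "b"), ("b", "c"),
                                ("c", "d"), ("d", "a")])
```

After a few dozen iterations the nodes spread apart and the edge lengths even out, which is the property that makes these layouts readable.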
You can also manually select and position the data points (a.k.a. “vertexes” or “nodes”). Here I’ve selected a group of nodes, which are highlighted in red, and dragged them away from the rest of the graph.
Once your information has been laid out, you can start exploring and making sense of it. One useful feature of NodeXL is its implementation of dynamic filters, which is something Excel has been sorely lacking for years. For instance, the graph below shows U.S. Senators in 2007, the connecting lines represent two senators who have voted the same way at least 65% of the time, and the color of each circle represents the senator’s political party (blue for Democrat, red for Republican, and yellow for Independent).
If we want to change how the information is filtered we can simply open the Dynamic Filters dialog box and apply or modify the filters. For instance, here I’ve used the slider below to modify the filter so it only displays connections between senators who have voted the same way at least 95% of the time:
Now we’re down to just a few lines and can see that significantly more Democrats voted the same way at least 95% of the time compared to their Republican counterparts:
NodeXL also supports zooming, panning, scaling, and the ability to automatically or manually create clusters of similar data. Below are a couple examples from the NodeXL website to give you a taste of the visualizations that can be created with it.
NodeXL is currently in beta release, so you might find a few remaining bugs here and there, but if you think network visualizations might be useful for your work, NodeXL provides a great way to get started.
-Bryan
January 20th, 2010
Even a brilliantly designed dashboard can be met with disapproval by those who will use it if we’re not careful to introduce it in a way that encourages them to focus on what matters. Designs that are effective for monitoring information are quite different from the designs that are usually featured by software vendors and thus emulated by those who use their products. As a result, what people expect of a dashboard’s design is often quite different from what they actually need.
I’ve been asked on several occasions to provide guidelines for dashboard designers to use when introducing a new dashboard. These requests have encouraged me to create a list of questions that the users of the proposed dashboard can be asked to help them assess the merits of its design. I’m sure that this list of questions that I’ve put together in the last few days can be improved with your help, so I’d appreciate it if you would review the following and suggest anything that comes to mind that might improve it. Please keep in mind that I define “dashboard” in a particular way. Here’s my definition:
A dashboard is a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance.
(Information Dashboard Design, Stephen Few, O’Reilly Media, 2006)
The key to this definition is the fact that a dashboard is used for monitoring purposes. Its effectiveness should be judged on the basis of its ability to help people monitor what’s going on; that is, to maintain situation awareness.
When asking people to assess the merits of a new dashboard, it usually works best to focus their attention first on the big picture (the dashboard as a whole) and then to drill into the details of each section.
The Dashboard as a Whole
- When you first look at the dashboard, where are your eyes drawn? Are your eyes drawn most to the items that deserve the most attention?
- Can you easily discern how information is organized on the dashboard (for instance, the different sections)?
- Can you easily spot the items that require the most attention?
- Does the dashboard draw your attention to the information rather than to other things that don’t actually convey information?
- Is the information that you consider most important featured prominently on the dashboard?
- Can you quickly scan the dashboard to get an understanding of what’s going on?
- Can you tell the date/time through which the data is effective (for example, as of the end of yesterday or as of five minutes ago)?
- Can you easily compare items and see relationships between items in all cases when that is useful?
- If it works best to get the information in a particular sequence, does the design encourage you to view it in this way and make it easy to do so?
- Does the dashboard provide everything you need to maintain overall situation awareness (the big picture of what’s going on)?
- Can you see everything that you need to construct an overview of what’s going on without having to scroll or change screens?
- Is there anything on the dashboard that you don’t understand? Do you find anything confusing?
Specific Parts of the Dashboard
- Does the way that each measure is displayed express the information in a way that directly supports your needs without having to do conversions or calculations in your head? This could involve something as simple as graphing the variance between expenses and budget directly, rather than making you compare two lines on a single graph.
- Can you rapidly (1) discern the value of each measure, (2) determine whether it is good, bad, or otherwise, and (3) compare it to something that allows you to judge the level of performance?
- Do you have enough information about each item to determine if you must respond in some way?
- If you need to respond to something, can you easily get to any additional information that is needed to determine how to respond?
- Can you perceive each measure as precisely as you need to without being forced to wade through more precision than you need?
- For each measure, can you tell if performance is improving, getting worse, or holding steady? For those measures that lack trend information, would the dashboard be more useful if it were shown?
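The suggestion above about graphing the variance between expenses and budget directly can be illustrated with a few lines of Python. The monthly figures here are hypothetical; the point is simply that computing the variance once, and displaying that single series (for example, as bars around a zero baseline), spares viewers from subtracting two lines in their heads:

```python
# Hypothetical monthly figures for illustration only.
months = ["Jan", "Feb", "Mar", "Apr"]
budget = [100, 100, 110, 110]
actual = [95, 104, 108, 118]

# Graph this single series instead of the two lines above:
# positive values mean over budget, negative mean under.
variance = [a - b for a, b in zip(actual, budget)]
print(variance)  # [-5, 4, -2, 8]
```

A single variance series with a zero baseline makes the direction and magnitude of each month’s deviation immediately visible, which is exactly the kind of pre-computed comparison the question above asks for.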
Take care,

January 14th, 2010
Despite the importance of analytical methods, data sensemaking sometimes leads to better decisions when we go with our guts. During the last few years, enough books to fill a small library have been written about the importance of reflective, analytical thinking, alerting us to the errors of less evolved thinking that we so easily slip into. I welcomed these books, read several, and even reviewed a few, but their popularity threatens to tip the balance too far in favor of analytical thinking. A yin-and-yang balance should exist between the reflective, rational techniques of analysis and the intuitive approach of experts, which rests on tacit knowledge constructed through time and experience. Tacit knowledge is what enables experts to recognize patterns that others miss entirely. The mental models that experts construct to understand how things work are a form of tacit knowledge. Experts can often size up a situation and know how to respond long before they can articulate their reasons. Although we can get into trouble by trusting our guts, on occasion they serve us better than careful reflection. This is especially true when we deal with problems that dwell in the shadows: those that are murky and complex.
This is the topic of a new book by Gary Klein, who is one of the finest and most insightful observers of human decision-making behavior in the world today. I first became familiar with Klein’s work when I read and reviewed his book Sources of Power: How People Make Decisions back in 2007. His new book is Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making.
Klein uses the difference between daylight vision and night vision as a metaphor to distinguish reflective analysis from intuition born of expertise:
Experience-based thinking is different from analysis-based thinking. The two aren’t opposed to each other; they are complementary, like daytime vision and night vision. Experience-based thinking isn’t the absence of analysis. It’s the application of all that we have encountered and learned.
This book is provocative in the way that it begins by listing 10 common beliefs about good decision making and then proceeds to tear down and replace them one by one. When you read this list, which follows, you might be surprised that anyone would question the merits of these popular beliefs.
- Teaching people procedures helps them perform tasks more skillfully.
- Decision biases distort our thinking.
- To make a decision, generate several options and compare them to pick the best one.
- We can reduce uncertainty by gathering more information.
- It’s bad to jump to conclusions; wait to see all the evidence.
- To get people to learn, give them feedback on the consequences of their actions.
- To make sense of a situation, we draw inferences from the data.
- The starting point for any project is to get a clear description of the goal.
- Our plans will succeed more often if we identify the biggest risks and find ways to eliminate them.
- Leaders can create common ground by assigning roles and setting ground rules in advance.
You should read this book, so I won’t steal its thunder by revealing too much. I do, however, want to give you a feel for its contents. Here are three excerpts to whet your appetite:
In complex and ambiguous situations, there is no substitute for experience…We put too much emphasis on reducing errors and not enough on building expertise.
A number of studies have shown that procedures help people handle typical tasks, but people do best in novel situations when they understand the system they need to control. People taught to understand the system develop richer mental models than people taught to follow procedures.
Smart technologies can make us stupid…What I am criticizing is decision-support systems that rely on shaky claims and misguided beliefs about how people think and what they need.
If you’re familiar with my work, you’ll notice that Klein and I both have a love-hate relationship with technology. When properly designed, decision-support technologies can serve as tools that help us think, but they can’t think for us. Relying on them too heavily, and relying on them for the wrong things, are common mistakes today. To use information technology effectively, we must know what computers do well and what people do well, and seamlessly interweave the strengths of both.
Take the time to read Streetlights and Shadows. It is filled with important and at times surprising insights, and Klein’s prose and many stories are just plain fun to read.
Take care,

Comments Off on Sensemaking in a World of Shadows