Thanks for taking the time to read my thoughts about Visual Business Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions that are either too urgent to wait for a full-blown article or too limited in length, scope, or development to require the larger venue. For a selection of articles, white papers, and books, please visit my library.
October 15th, 2015
After the completion of my 2012 Dashboard Design Competition, I created my own version of the Student Performance Dashboard based on the same data that the competitors used. Since then, a few individuals and software vendors have asked for a copy of the data so they could reproduce my version of the dashboard using their dashboard-creation tool of choice. Recently, I received such a request from an application developer named Robert Monfera. He wanted to create a functional version of the dashboard using d3, a JavaScript library for creating rich, data-driven graphics on the Web. In recent years, d3, created by Michael Bostock while he was a doctoral student at Stanford (he later joined The New York Times), has become the preferred tool among graphics designers and developers for creating infographics and web-based analytical applications when a commercial data visualization product won’t do. Robert wanted to create the Student Performance Dashboard using d3 as a learning exercise. As a reminder, here’s a small section of the dashboard:

I was happy to give Robert the data along with permission to recreate my design, but my willingness evolved into enthusiasm when Robert began to show me what he could do with d3. I quickly realized that he was not your everyday software developer. I invited him to add some sorting functionality that I wanted to demonstrate with a working example of the dashboard and promised to showcase his work when it was ready—and now it is.
When I introduced my version of the Student Performance Dashboard, which appears in the second edition of my book Information Dashboard Design, I acknowledged that the ability to sort the rows of student information in various ways could be useful. However, I suggested that this should be implemented in a way that automatically reverts to the original sort order immediately after viewing the data. This is because dashboards will only work for rapid performance monitoring if they present the data in the same manner from day to day, without alteration. Otherwise, people will never learn to use them rapidly because of the disorientation that is caused when anything other than the data changes, including the way in which items are sorted. Ideally, I wanted the interface to allow the viewer to click on a column of data, which would cause the rows to sort based on the variable contained in that column for as long as the mouse button was held down and then revert to the original order as soon as the button was released. Robert was able to implement this functionality as I envisioned, causing the rows to visibly reorder, taking just enough time for the viewer to notice whether only a few rows or many needed to be rearranged to exhibit the new order. It works beautifully, which you can see for yourself by interacting with the dashboard.
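For readers who are curious how such a press-and-hold sort might be wired up, here is a minimal sketch of the interaction using d3 v3 (the version current as I write this). To be clear, this is an illustration only, not Robert’s actual code: the student records, field names, and layout below are placeholder assumptions.

```js
// A minimal sketch of press-and-hold sorting with d3 v3.
// The data and field names are placeholders, not Robert's actual code.
var students = [
  { name: "Alice", grade: 92 },
  { name: "Bob",   grade: 78 },
  { name: "Carol", grade: 85 }
];

var rowHeight = 20;

var svg = d3.select("body").append("svg")
    .attr("width", 300)
    .attr("height", 30 + students.length * rowHeight);

// One text row per student, positioned by the original order.
var rows = svg.selectAll("text.row")
    .data(students, function (d) { return d.name; })
  .enter().append("text")
    .attr("class", "row")
    .attr("x", 10)
    .attr("y", function (d, i) { return 30 + i * rowHeight; })
    .text(function (d) { return d.name + "  " + d.grade; });

// Animate the rows into a given order, slowly enough that the eye can
// follow how many rows move and by how much.
function reorder(indexOf) {
  rows.transition().duration(750)
      .attr("y", function (d) { return 30 + indexOf(d) * rowHeight; });
}

// The "column heading": sort by grade while the mouse button is held
// down, then revert to the original order when it is released.
var byGrade = students.slice().sort(function (a, b) { return b.grade - a.grade; });

svg.append("text")
    .attr("x", 10).attr("y", 15)
    .style("cursor", "pointer")
    .text("Grade")
    .on("mousedown", function () {
      reorder(function (d) { return byGrade.indexOf(d); });
    })
    .on("mouseup", function () {
      reorder(function (d) { return students.indexOf(d); });
    });
```

A real implementation would also need to handle the mouse leaving the heading before the button is released; the sketch conveys only the press-to-sort, release-to-revert idea.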
Even though I wouldn’t ordinarily include brushing and filtering functionality in a performance monitoring dashboard, Robert wanted to see its effects for himself, so he added this functionality as well. When you select one or more rows in the dashboard, the summary graphs at the bottom of each row are filtered to reflect the selected students only. To select multiple rows, simply click and drag across the entire set. You can select non-contiguous rows by holding down the Ctrl key and either clicking individual rows or dragging down multiple rows.
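Again as a sketch only, the filtering logic behind such a selection might look something like the following, building on the hypothetical students and rows of the previous example; the #summary element and the simplified Ctrl-click handling are likewise my own placeholder assumptions.

```js
// A sketch of selection-driven filtering, building on the hypothetical
// "students" and "rows" of the previous example (d3 v3).
var selected = d3.set();  // names of the currently selected students

function toggleRow(d) {
  // A plain click starts a new selection; Ctrl-click extends it.
  if (!d3.event.ctrlKey) selected = d3.set();
  if (selected.has(d.name)) selected.remove(d.name);
  else selected.add(d.name);
  updateSummary();
}

// Recompute the summary over the selected students only (or over all
// of them when nothing is selected), then redraw the summary display.
function updateSummary() {
  var subset = students.filter(function (d) {
    return selected.empty() || selected.has(d.name);
  });
  var meanGrade = d3.mean(subset, function (d) { return d.grade; });
  d3.select("#summary").text("Mean grade: " + meanGrade.toFixed(1));
}

rows.on("click", toggleRow);
```

In Robert’s actual dashboard, of course, the recomputation redraws the summary graphs rather than a single number, and dragging selects contiguous rows, but the flow (select, filter, recompute, redraw) is the same.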
I should let Robert speak about his work for himself. Here’s what he wrote to me:
D3.js is a leading tool for data visualization development; I’ve used it for my customers’ options trade charting, machine learning visualizations, and an environmental dashboard. Mike Bostock, who created d3.js, and others have provided a wealth of bite-sized examples, including your bullet graph.
Your books and articles on dashboard design match my experience that people are interested in context and detail behind the focus, rather than regressing to sparse presentations for their apparent simplicity and appeal. Many of your articles deal with simplicity vs. complexity, and your teachings lead to complex, information-dense, yet intelligible, tailored information graphics, also rooted in Tufte’s findings, such as this:
If the visual task is contrast, comparison and choice – as so often it is – then the more relevant information within eyespan, the better. Vacant, low-density displays, the dreaded posterization of data spread over pages and pages, require viewers to rely on visual memory – a weak skill…
My learning goals called for an implementation of a data-rich dashboard with d3.js that can be the basis for further, shared experimentation. The best publicly accessible dashboard design resource I know of, your 2012 Dashboard Design Competition, even has multiple realizations, including your take on the challenge, which includes some of your research, e.g. on color and bandlines, which, similar to your bullet graphs, are poised to become widespread. Also, I feel deeply about education via visualization using, as Bret Victor calls it, Magic Ink as an underexplored medium to help people understand and learn.
The experiment involved complex, deeply nested visual elements, following the simplest structures and lightest abstractions – testing d3.js on an exacting, detailed, bespoke design, without involving other structuring tools I’d use for clients. While the program has specific shortcomings, the upshot is that problems and patterns have emerged that wouldn’t have come up with a much smaller task.
A couple of interactions were also added – no claims about their utility – even though a dashboard, as you define it, is not an exploratory data analysis tool. Some of them look interesting: for example, clicking on a column heading and seeing how many (or how few) lines move around, and by how much, is viscerally revealing of correlations with grade score. Also, the aggregate bandline seems informative and engaging in its transitions as the set of rows changes during brushing.
Planned work involves factoring out the bandline function for easy reuse by everyone, improving the data binding abstraction (also in light of d3 v4), proper code structuring, and subsequent open sourcing. Interactions on mobile devices are not yet enabled. Fixed coordinates should be replaced by configuration or automatic layout optimization.
Steve, I’m grateful for the permission you gave me to implement your design and use your data, and for our discussions about design. It’s been an educational journey and a starting point for further exploration.
To explore Robert’s work on your own, simply click the dashboard image below to access the working version. To contact Robert Monfera, you may reach him via email at monfera.robert at gmail.com.
Take care,

September 30th, 2015
In my recent newsletter article titled “A Course of Study in Analytical Thinking,” I included “scientific thinking” as a specific type of thinking that we should understand and practice as data sensemakers. For this particular topic, I recommended the book A Beginner’s Guide to Scientific Method, Fourth Edition, by Stephen S. Carey as a useful introduction, but admitted that I had not yet read the book. I had read others on the topic that didn’t suit the need, and Carey’s book seemed to be the best bet based on the author’s description and the comments of several satisfied readers. Within a day or two of the article’s publication, my copy of the book finally arrived, and I’m relieved to say that it’s a perfect fit.

It’s a short book of only 146 pages (including the index), but it covers the topic beautifully. It even includes quizzes and exercises for the dedicated learner. I especially appreciate its thoughtful focus on the essence of science and scientific method, never venturing into territory that non-scientists would find esoteric or intimidating. If you’re like me, you probably assumed that there were many good books of this type available, but this is surprisingly not the case. Given the importance of science and the fact that everyone should understand what it is and how it is essentially performed, this is a tragic void. Thankfully, Carey must have recognized this two decades ago when he wrote the first edition and has continued to serve the ongoing need by updating it every few years with current examples.
Carey breaks the content into six chapters:
- Science (This chapter defines science, describes the methods that are common across all branches of science, and argues for its importance.)
- Observation (This chapter describes the process of effective observation.)
- Explanation (This chapter focuses on the goal of explaining “why things happen as they do in the natural world,” including the special role of hypotheses and theories.)
- Experimentation (This chapter describes the role of experimentation, various types of experiments, and the ways experiments should be designed and conducted to produce reliable findings.)
- Establishing Causal Links (This chapter extends the topic of experimentation by addressing the special techniques, including statistics, that must be used to establish causation.)
- Fallacies in the Name of Science (This chapter draws a clear distinction between science and pseudo-science, including basic tests for distinguishing science from its imitation.)
Unless you’re already trained in the ways of science, you’ll find this book enlightening and enjoyable. It’s quite possible that you’ve already published a research paper in your field of study but somehow never learned what this little book teaches. I’ve read many research papers, especially in my field of information visualization, that had the appearance of science, with technical jargon and lots of statistics (often misapplied), but were in fact pseudo-science because the researchers and their professors did not understand the basic methods of science. So many time-consuming but ultimately worthless projects might have been salvaged had the researchers read this simple little book.
Take care,

P.S. When I wrote this blog post, I’d forgotten how horribly expensive this book is. It lists for almost $100. Even discounted, it will still cost you nearly $80. This is unconscionable. I doubt that it was the author’s decision to price it out of reach. I suspect that this is an example of Wadsworth Publishing’s shortsightedness. They see it as a textbook that only students will purchase – students who will have no choice in the matter. In fact, this book would have a broad audience if it were reasonably priced; so much so that the publisher and author would earn a great deal more money. What a shame! Until this changes, try to find yourself a used copy.
August 31st, 2015
It annoys me when I see poor journalistic infographics, in part because I value journalism and hate to see it done ineffectively. Good news organizations take the quality of their journalists’ writing seriously. Journalists and their editors work hard to produce news stories that are accurate, clear, and compelling. Those who can’t write effectively lose their jobs. So why is it that some of the same publications that take great pains to produce well-written articles don’t bat an eye when they produce infographics that are inaccurate or unnecessarily difficult to understand?
Take the following infographic recently published by Time as an example. The topic is important, “Why We Still Need Women’s Equality Day,” but notice how unnecessarily hard you must work to get the information and how difficult it is to compare the values that it displays and combine them into a sense of the whole.

This infographic provides eight measures of women’s participation in government. Each measure is expressed as a percentage of female vs. male participation. So why is each measure presented graphically in a different way? A single graphical form that makes the percentages of female vs. male participation easy to read and understand for all of the eight measures would work so much better. Also, given the fact that there is value in comparing the eight measures, why does the infographic arrange them vertically in a way that no computer or tablet screen could contain without scrolling? And even if all eight measures could fit on a single screen, because every one is expressed in a different manner, they still couldn’t be quickly and easily compared.
Has anything been gained by displaying the eight measures in these various ways? Some infographic designers would argue that by displaying the measures differently, visual interest has been increased, resulting in greater reader engagement. I suppose that there are people who might actually find this variety of expression engaging, but only in a way that draws them into the pictures independent of their meanings. Is someone who reads this article merely to enjoy the pictures with little concern for the content and its meaning an appropriate audience? Only if the journalist is trying to win a popularity contest among the disinterested.
Here’s the same story told primarily in graphical form, but this time it is clear, simple to read, makes comparisons easy, and brings the measures together in a way that makes the whole available at a glance.
A great deal has been gained through this redesign, but has anything been lost? Nothing other than meaningless, complicating, and distracting variety.
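For readers who want to experiment with this principle themselves, here is a minimal d3 v3 sketch of one such uniform encoding: an aligned horizontal bar per measure against a common 0–100% scale. The measure names and values below are placeholders, not Time’s actual figures, and this illustrates the principle rather than reproducing my redesign.

```js
// A minimal sketch of a single, consistent graphical form: one aligned
// horizontal bar per measure on a common 0-100% scale (d3 v3). The
// measures and values are placeholders, not Time's actual figures.
var measures = [
  { label: "Measure A (placeholder)", femalePct: 20 },
  { label: "Measure B (placeholder)", femalePct: 25 },
  { label: "Measure C (placeholder)", femalePct: 33 }
];

var width = 500, barHeight = 24, labelWidth = 220;
var x = d3.scale.linear().domain([0, 100]).range([0, width - labelWidth]);

var svg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", measures.length * barHeight);

var row = svg.selectAll("g")
    .data(measures)
  .enter().append("g")
    .attr("transform", function (d, i) {
      return "translate(0," + i * barHeight + ")";
    });

row.append("text")
    .attr("x", labelWidth - 10)
    .attr("y", barHeight / 2)
    .attr("dy", ".35em")
    .attr("text-anchor", "end")
    .text(function (d) { return d.label; });

// Every measure uses the same encoding and the same scale, so the eye
// can sweep down the column and compare all of them at a glance.
row.append("rect")
    .attr("x", labelWidth)
    .attr("y", 4)
    .attr("height", barHeight - 8)
    .attr("width", function (d) { return x(d.femalePct); });

row.append("text")
    .attr("x", function (d) { return labelWidth + x(d.femalePct) + 5; })
    .attr("y", barHeight / 2)
    .attr("dy", ".35em")
    .text(function (d) { return d.femalePct + "%"; });
```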
Isn’t it time that we demand of graphical journalists the same standards of effectiveness that we demand of their traditional counterparts? Journalism is journalism. Whether the story is told in words, pictures, or both should be determined by the nature of the information, and the integrity of that information should always be respected.
Take care,

August 18th, 2015
Many books have been written in recent years about our brains—how they evolved, how they work, how they often fail us, how they sometimes serve us in a blink with little effort, and how they differ from the brains of other animals. Neuroscientists and psychologists in particular have fascinated us with advances in our understanding of the human brain. This emerging understanding reveals a challenge that we must face and overcome to preserve our species.

Our brains evolved primarily to survive as hunter-gatherers on the African savannah, which was straightforward and simple. Not simple in that it was easy, but simple in that the problems that we faced were not difficult to understand. The automatic, intuitive, fast, and emotional types of thinking that made us successful in that world—sometimes called System 1 or Type 1 thinking—served us well for millennia. Something began to happen, however, when changes in our brains led to tool making and social cooperation, which gave birth to a cultural revolution. We went from living in small bands on the African savannah to living in larger groups throughout the world. That revolution, which coincided with expanding cognitive abilities that are deliberate rather than automatic, cognitively demanding rather than intuitive, slow rather than fast, and rational rather than emotional—sometimes called System 2 or Type 2 thinking—not only gave us greater ability to adapt to the world but also an ability to shape the world. The result is a world, unique to humans, that is complex.

This complexity of our own making has made our lives challenging. And, because we usually try to deal with this complexity using the System 1 thinking that isn’t equipped to deal with it, we have made mistakes along the way that threaten our very existence, such as the damage to our environment that is a by-product of industrialization and the potential for mass destruction that is made possible through science and the technologies that it has produced. Learning to use System 2 thinking to solve complex problems is the fundamental challenge of our age. In his book, Closing the Mind Gap: Making Smarter Decisions in a Hypercomplex World (2014), Ted Cadsby ties together the threads of understanding that are needed to meet this challenge.

Ted Cadsby is not a neuroscientist or a psychologist. He is a former banking executive with an MBA, seasoned with training in philosophy, who now works as a researcher, writer, and speaker on complexity and decision making. As such, his interest and work in decision making are not just theoretical. He is intimately familiar with the types of problems that we face day to day in our lives and especially in our work. In Closing the Mind Gap, by drawing on the work of many researchers from various disciplines, he’s managed to weave together into a coherent whole an understanding of the gaps that exist between our System 1 thinking abilities and the complex problems that we face, along with the steps that we must take to develop and apply our System 2 thinking abilities to solve these problems.
- In Part I, “Brains Coming into the World,” he describes how our brains evolved and the ways in which our cognitive abilities are bounded.
- In Part II, “Brains Sorting out the World,” he describes how our brains work—how System 1 thinking simplifies and satisfices in ways that served our ancient ancestors well and continue to serve us well when facing easy, day-to-day decisions, but how simplifying and satisficing fail when faced with complexity.
- In Part III, “The Brain-World Problem,” he explains that we live in two worlds: World 1, which is simple and easy to navigate using intuitive System 1 thinking, and World 2, which is complex and therefore beyond the reach of intuition.
- In Part IV, “The Brain-World Solution,” he explains that Systems Theory, the scientific method, and statistical thinking enable “metacognitive thinking”—a form of System 2 thinking—to deal with complexity.
- In Part V, “Brains and People,” he explains how complexity in our selves (psychology) and complexity in others (sociology) complicate the struggles that we face.
This book consolidates and articulates a great deal of content that you could find elsewhere in greater detail, but with much more time and effort. As I was reading it, it felt a little repetitive at times, but that might have been because I’m well read in this subject matter. I recommend this book to anyone who engages in complex decision making or supports those who do. This should be required reading for anyone who is embarking on a career in analytics. Even if you’ve been doing data analysis for years, this book will help you rebuild a foundation for analytical thinking that will make you a better analyst. It’s never too late to strengthen your skills in fundamental ways.
Take care,

August 8th, 2015
Let me begin by explaining the title of this piece. I recently ran across a blog article about something called the “Data Visualization Competency Center (DVCC).” Given that I spend my time helping people develop data visualization competency, this caught my interest. As I read the article, I found myself saying “Yes” to a few of the author’s points, but I had to take a detour when I read that the DVCC was sponsored by a cloud-based business intelligence (BI) service provider named “GoodData.” This company was not on my radar, so I quickly accessed its website to get a sense of its work. It took no more than a minute to realize that GoodData knows little about data visualization and certainly doesn’t support data visualization competency. Many examples of its visualizations were poorly designed. In other words, its visualizations are typical of BI software—typically bad. There should be a rule that you can only include “good” in your name if you do good work. Then again, marketing is all about making claims that are seldom true.

The author of the article, Lindy Ryan, does not work for GoodData, but is instead the “Research Director of Data Discovery and Visualization” for a BI consultancy named Radiant Advisors. Ms. Ryan recently wrote a report titled “The Data Visualization Competency Center: Balancing Art, Science and Culture,” which advocates an extension of the “Business Intelligence Competency Center (BICC)” to improve the use of data visualization within businesses. With respect to data visualization, Radiant Advisors fits its name only if by “radiant” you mean “shiny,” as in, “The light reflecting off of that street sign is so shiny I can’t read it.”

Snarky comments aside, let me get to the point of this blog piece: most of the people and organizations who proclaim expertise in data visualization are not experts. Even worse, many of them sell snake oil. Don’t be deceived. If you want to learn about data visualization and put it to good use, take care in choosing your advisors.
A little more background will add texture to this story. When I read Ryan’s article and saw that she associated GoodData with the Data Visualization Competency Center, I immediately responded by commenting that GoodData does not support data visualization best practices. After some brief back and forth between us in the blog’s comments, she suggested that we take the discussion offline. After a few days, rather than hearing from Ms. Ryan, I received an email out of the blue from the PR agency that represents GoodData, requesting that I work with them on a research project. Hmmm…I’ve had this experience before. On more than one occasion, soon after pointing out problems with a product, I’ve received an offer of work from the offending vendor. If a vendor came to me in response to a negative review and genuinely asked for help in improving their products, I would be happy to oblige. I am not, however, open to bribes.
After turning down GoodData’s suspicious offer, I finally heard from Ms. Ryan. She seemed quite nice, but she knows little about data visualization, which she readily admitted. Why then is she a Data Visualization Research Director and why does she advise organizations about their use of data visualization? And, even more fundamentally, how is she qualified to develop a Data Visualization Competency Center? She isn’t. This concerns me because Ms. Ryan isn’t unique. Most of the people who set themselves up as data visualization leaders and advisors are not qualified. It has become business as usual to make fraudulent claims. This won’t change if we fail to expose them.
Take care,
