Thanks for taking the time to read my thoughts about Visual Business
Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions
that are either too urgent to wait for a full-blown article or too
limited in length, scope, or development to merit the larger venue.
For a selection of articles, white papers, and books, please visit
August 18th, 2015
Many books have been written in recent years about our brains—how they evolved, how they work, how they often fail us, how they sometimes serve us in a blink with little effort, and how they differ from the brains of other animals. Neuroscientists and psychologists in particular have fascinated us with advances in our understanding of the human brain. This emerging understanding reveals a challenge that we must face and overcome to preserve our species.

Our brains evolved primarily to survive as hunter-gatherers on the African savannah, which was straightforward and simple. Not simple in that it was easy, but simple in that the problems that we faced were not difficult to understand. The automatic, intuitive, fast, and emotional types of thinking that made us successful in that world—sometimes called System 1 or Type 1 thinking—served us well for millennia.

Something began to happen, however, when changes in our brains led to tool making and social cooperation, which gave birth to a cultural revolution. We went from living in small bands on the African savannah to living in larger groups throughout the world. That revolution, which coincided with expanding cognitive abilities that are deliberate rather than automatic, cognitively demanding rather than intuitive, slow rather than fast, and rational rather than emotional—sometimes called System 2 or Type 2 thinking—not only gave us greater ability to adapt to the world but also an ability to shape the world.

The result is a world, unique to humans, that is complex. This complexity of our own making has made our lives challenging. And, because we usually try to deal with this complexity using the System 1 thinking that isn’t equipped to deal with it, we have made mistakes along the way that threaten our very existence, such as the damage to our environment that is a by-product of industrialization and the potential for mass destruction that is made possible through science and the technologies that it has produced.
Learning to use System 2 thinking to solve complex problems is the fundamental challenge of our age. In his book, Closing the Mind Gap: Making Smarter Decisions in a Hypercomplex World (2014), Ted Cadsby ties together the threads of understanding that are needed to meet this challenge.
Ted Cadsby is not a neuroscientist or a psychologist. He is a former banking executive with an MBA, seasoned with training in philosophy, who now works as a researcher, writer, and speaker on complexity and decision making. As such, his interest and work in decision making are not just theoretical. He is intimately familiar with the types of problems that we face day to day in our lives and especially in our work. In Closing the Mind Gap, by drawing on the work of many researchers from various disciplines, he’s managed to weave together into a coherent whole an understanding of the gaps that exist between our System 1 thinking abilities and the complex problems that we face, along with the steps that we must take to develop and apply our System 2 thinking abilities to solve these problems.
- In Part I, “Brains Coming into the World,” he describes how our brains evolved and the ways in which our cognitive abilities are bounded.
- In Part II, “Brains Sorting out the World,” he describes how our brains work—how System 1 thinking simplifies and satisfices in ways that served our ancient ancestors well and continue to serve us well when facing easy, day-to-day decisions, but how simplifying and satisficing fail when faced with complexity.
- In Part III, “The Brain-World Problem,” he explains that we live in two worlds: World 1, which is simple and easy to navigate using intuitive System 1 thinking, and World 2, which is complex and therefore beyond the reach of intuition.
- In Part IV, “The Brain-World Solution,” he explains that Systems Theory, the scientific method, and statistical thinking enable “metacognitive thinking”—a form of System 2 thinking—to deal with complexity.
- In Part V, “Brains and People,” he explains how complexity in our selves (psychology) and complexity in others (sociology) complicate the struggles that we face.
This book consolidates and articulates a great deal of content that you could find elsewhere in greater detail, but with much more time and effort. As I was reading it, it felt a little repetitive at times, but that might have been because I’m well read in this subject matter. I recommend this book to anyone who engages in complex decision making or supports those who do. This should be required reading for anyone who is embarking on a career in analytics. Even if you’ve been doing data analysis for years, this book will help you rebuild a foundation for analytical thinking that will make you a better analyst. It’s never too late to strengthen your skills in fundamental ways.
August 14th, 2015
This blog entry was written by Bryan Pierce of Perceptual Edge.
Registration is now open for Stephen Few’s two advanced U.S. workshops. If you’ve already read Stephen’s introductory books or attended his introductory courses and are looking for a way to improve your dashboard design or visual data analysis skills further, these workshops provide a great way to do so:
Each of these workshops will only be taught once in the U.S. in 2016. Signal will also be taught in London on Mar. 9-10, Stockholm on Oct. 10-11, and Melbourne on Nov. 7-8, though registration is not open for these workshops yet. In addition to the U.S. workshop, Advanced Dashboard Design will also be offered in the Netherlands on May 23-25 and Melbourne on Nov. 9-11, but registration isn’t open for these workshops yet, either.
August 8th, 2015
Let me begin by explaining the title of this piece. I recently ran across a blog article about something called the “Data Visualization Competency Center (DVCC).” Given that I spend my time helping people develop data visualization competency, this caught my interest. As I read the article, I found myself saying “Yes” to a few of the author’s points, but I had to take a detour when I read that the DVCC was sponsored by a cloud-based business intelligence (BI) service provider named “GoodData.” This company was not on my radar, so I quickly accessed its website to get a sense of its work. It took no more than a minute to realize that GoodData knows little about data visualization and certainly doesn’t support data visualization competency. Many examples of its visualizations were poorly designed. In other words, its visualizations are typical of BI software—typically bad. There should be a rule that you can only include “good” in your name if you do good work. Then again, marketing is all about making claims that are seldom true.

The author of the article, Lindy Ryan, does not work for GoodData, but is instead the “Research Director of Data Discovery and Visualization” for a BI consultancy named Radiant Advisors. Ms. Ryan recently wrote a report titled “The Data Visualization Competency Center: Balancing Art, Science and Culture,” which advocates an extension of the “Business Intelligence Competency Center (BICC)” to improve the use of data visualization within businesses. In respect to data visualization, Radiant Advisors fits its name only if by “radiant” you mean “shiny,” as in, “The light reflecting off of that street sign is so shiny I can’t read it.”

Snarky comments aside, let me get to the point of this blog piece: most of the people and organizations who proclaim expertise in data visualization are not experts. Even worse, many of them sell snake oil. Don’t be deceived.
If you want to learn about data visualization and put it to good use, take care in choosing your advisors.
A little more background will add texture to this story. When I read Ryan’s article and saw that she associated GoodData with the Data Visualization Competency Center, I immediately responded by commenting that GoodData does not support data visualization best practices. After some brief back and forth between us in the blog, she suggested that we take the discussion offline. After a few days, instead of hearing from Ms. Ryan, I received an out-of-the-blue email from the PR agency that represents GoodData, requesting that I work with them on a research project. Hmmm…I’ve had this experience before. On more than one occasion after pointing out problems with a product, I’ve soon afterwards received an offer of work from the offending vendor. If a vendor came to me in response to a negative review and genuinely asked for help in improving their products, I would be happy to oblige. I am not, however, open to bribes.
After turning down GoodData’s suspicious offer, I finally heard from Ms. Ryan. She seemed quite nice, but she knows little about data visualization, which she readily admitted. Why then is she a Data Visualization Research Director and why does she advise organizations about their use of data visualization? And, even more fundamentally, how is she qualified to develop a Data Visualization Competency Center? She isn’t. This concerns me because Ms. Ryan isn’t unique. Most of the people who set themselves up as data visualization leaders and advisors are not qualified. It has become business as usual to make fraudulent claims. This won’t change if we fail to expose them.
August 6th, 2015
While being briefed on a product earlier this week, the company’s founder and I agreed on one point only: most of the people who are currently tasked with data analysis lack the skills that are required to do the work. He and I, however, imagine conflicting solutions to this problem. He believes that technology must come to the rescue by doing the work for these people who can’t do it for themselves. I believe that even the best technologies cannot do the work of skilled data analysts and that the problem can only be effectively addressed by helping people develop analytical skills. He agreed that equipping people with the necessary skills would work better, but dismissed it because it is not a “scalable solution.”

The essence of his case went something like this: “Data is increasing at an exponential rate, so our need for analytics cannot be solved by investing in human resources because humans are not sufficiently scalable, but technologies are.” Consider this line of reasoning for a moment. It relies on the following premise: “Exponential increases in data can only be addressed by exponential increases in analytical horsepower.” This premise is fallacious. Nate Silver made this point in his book The Signal and the Noise when he wrote:
If the quantity of information is increasing by 2.5 quintillion bytes per day, the amount of useful information certainly isn’t. Most of it is just noise, and the noise is increasing faster than the signal. There are so many hypotheses to test, so many data sets to mine—but a relatively constant amount of objective truth.
The exponential growth in raw data that we’re experiencing is mostly producing noise. The amount of useful information is not increasing exponentially; therefore, the need for analytical horsepower is also not increasing exponentially. Data sensemaking is a human activity that can at best be augmented and assisted by analytical tools. The only viable solution to the analytical challenges that we face is to develop the human resources that we need. This is where our attention and our investments should be focused. Don’t trust a technology vendor who claims that skilled data analysts can be replaced with his product. That analytical product does not exist.
This company’s founder claims that his product can analyze a data set and present all of the potentially useful findings in a series of simple graphs and plain English explanations without any human involvement. During the briefing, he made an off-the-cuff comment that caused the hairs on the back of my neck to bristle. He said that his product “empowers users.” He must understand empowerment quite differently than I do. As I understand it, empowerment involves an increase in ability. Software that does for you what you could do better yourself with proper training isn’t empowering.
This fellow’s notion of empowerment bothered me because I work hard to actually empower people by teaching them analytical skills. I know how much it means to people to become truly empowered with useful abilities that enable them to affect the world in beneficial ways. No one with an ounce of integrity wants to bear the title “data analyst” while doing nothing but delivering a computer’s findings to someone else without adding any value. If this is the future that analytics technologies promise, count me out. Fortunately, this isn’t a future that technologies are likely to achieve.
July 28th, 2015
Morality is a function of the brain. When we make a distinction between matters of the heart and head, we are in fact distinguishing two modes of thinking that take place in our brains: System 1 thinking, which is fast, emotional, and intuitive (heart), and System 2 thinking, which is slow, rational, and deliberative (head). Daniel Kahneman beautifully describes these two modes of thinking in his book Thinking, Fast and Slow. Both modes are useful, but they are best suited for different tasks. In some situations we should go with our hearts (or guts, as in “gut feelings”); others require the higher-order rational thinking that the prefrontal cortex (PFC) evolved to handle. In his book Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Joshua Greene, who heads the Moral Cognition Lab in Harvard University’s department of psychology, explains how this distinction applies to morality.
The moral sense that we feel in our guts and experience intuitively is a product of System 1 thinking. Some things just feel wrong and others just feel right. This moral sense evolved in our species to enable cooperation within groups. Social cooperation created an Us (our tribe) that could better compete against Them (other tribes). This automatic sense of morality takes on somewhat different forms from tribe to tribe (i.e., cultural groups, including distinct subcultures within societies, such as liberals and conservatives), but it is largely universal in nature, dissuading us from cheating our neighbors and killing our friends. Because it evolved to help us compete against other groups to give us an advantage for propagating our own kind, this moral sense pits Us against Them in a way that complicates matters in the modern world. The kind of morality that is needed to embrace a global Us is a product of System 2 thinking. Just as we need to know when to shift into System 2 thinking to solve our personal and group problems, we must do the same to solve our global problems by creating a shared metamorality for the modern world.
In his book Moral Tribes, Joshua Greene explains how morality evolved, how it works in our brains, and how it can be shaped in rational ways to enable our species and the world that we share to survive and perhaps even flourish. Here’s an excerpt from the book:
The human brain is like a dual-mode camera with both automatic settings and a manual mode. A camera’s automatic settings are optimized for typical photographic situations (“portrait,” “action,” “landscape”). The user hits a single button and the camera automatically configures the ISO, aperture, exposure, et cetera — point and shoot. A dual-mode camera also has a manual mode that allows the user to adjust all of the camera’s settings by hand. A camera with both automatic settings and a manual mode exemplifies an elegant solution to a ubiquitous design problem, namely the trade-off between efficiency and flexibility. The automatic settings are highly efficient, but not very flexible, and the reverse is true of the manual mode. Put them together, however, and you get the best of both worlds, provided that you know when to manually adjust your settings and when to point and shoot.
The rational means that Greene proposes as the basis for System 2 (manual camera mode) moral thinking is an old philosophy with an unfortunate name: utilitarianism. Despite the name, utilitarianism doesn’t frame life in cold, mechanistic terms, but strives to achieve the greatest life experiences for the most people possible without partiality. It is its impartiality that allows us to exceed the boundaries of our separate tribes. This 19th-century philosophy of Jeremy Bentham and John Stuart Mill offers new hope for our species to shape a truly moral world.
Moral Tribes is an important book. This is not merely because it is thoughtful, well written, and innovative, but also because it teaches a lesson that we desperately need to learn. Despite tremendous historical strides in reducing violence in our world, our potential for doing harm to the earth and one another due to the power of our modern technologies is far greater than in the past. We who work with information technologies dare not ignore the concerns of morality by compartmentalizing them as irrelevant to our work. What we do with information has a moral dimension that is considered far too seldom. The moral thinking that we need today is not the morality of our forebears. We owe it to future generations to get this right.