Thanks for taking the time to read my thoughts about Visual Business
Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions
that are either too urgent to wait for a full-blown article or too
limited in length, scope, or development to require the larger venue.
For a selection of articles, white papers, and books, please visit
February 6th, 2017
I began talking about finding and then telling the stories that reside in data 13 years ago, several years before “data storytelling” became a common expression and popular pursuit. Mostly, I was speaking of stories metaphorically. In some respects, I regret using this expression because, like many metaphors, its use has become overblown and misleading. I did not mean to suggest that stories literally reside in data, or, if I did, I was mistaken. Rather, facts reside in data from which stories can sometimes be woven. Literally speaking, storytelling involves a narrative presentation that consists of a beginning, middle, and end, along with characters, plots, and often dramatic tension. Data does not tell stories, people do.
Don’t be misled: data storytelling (i.e., the presentation of data in narrative form) makes up a tiny fraction of data visualization. The vast majority of data visualizations that we create present facts without weaving them into stories. Relatively few of the facts that we display in data visualizations lend themselves to storytelling. I’m not diminishing the usefulness of data storytelling, which can be incredibly powerful when appropriate and done well. I’m merely pointing out that data storytelling is not some new endeavor or skillset that dominates data visualization. It is a minor—but nonetheless important and useful—aspect of data visualization. Not everyone who works in the field of data visualization must be a skilled storyteller. In general, it’s more valuable to be skilled in data sensemaking and graphicacy, as well as a clear thinker and communicator, and to possess knowledge of the data domain.
When facts can indeed be woven into a story, however, do so if you know how. We love stories. They can breathe life into data. Just don’t try to impose a story on a set of facts to create life where it doesn’t exist.
January 25th, 2017
Tim Wu, the professor at Columbia Law School who coined the term “net neutrality,” is the author of an important and extraordinarily well-researched and well-written new book titled The Attention Merchants: The Epic Scramble to Get Inside Our Heads. In it, Wu traces the entire history of the technology-enabled work of merchants—those who wish to sell us something—to dominate our attention and, in so doing, to create demand for their products and services.
From the snake-oil salesmen of old to the more pervasive, less noticeable, and more effective methods that define today’s Web experience, this book takes us on a comprehensive and insightful journey through the entire history of advertising and explains how the efforts of attention merchants have not only created demands that artificially dominate our lives, but have also gotten into our heads in ways that have fundamentally changed who we are. Even though Wu is concerned that attention-grabbing technologies have had ill effects, this book is not a screed. It exudes the even-handed tone of a scholar. Wu lays out the facts without preaching, but his concern is nonetheless evident and pressing.
A good life and the sensible decisions that enable it are hindered by the onslaught of distractions that dominate most of our attention today. The constant tug of social media, ubiquitously accompanied by increasingly targeted advertising content, leaves little room for reflection. It exercises its influence largely at an unconscious level.
As the world fills to overflowing with unremitting noise, our lives are impoverished. We have traded our attention for hollow promises of useful content and experiences: a Faustian bargain. The dominant attention merchants of our day, Web-based services such as Facebook and Google, have a self-serving agenda that is much different from ours, but that isn’t obvious without finding a moment of stillness in the eye of the storm. Several authors have written about the battle for our attention that is being waged against us today, including several whose books I’ve reviewed in this blog. Tim Wu has added significantly to this body of work by telling the chilling story of attention merchants.
We can choose to opt out of this Faustian bargain that we’ve inadvertently made, but it isn’t easy. By reading The Attention Merchants, you will learn about the main forces that have created this problem—all familiar names—and this knowledge will equip you for battle.
January 23rd, 2017
“Data Science” is a misnomer. Science, in general, is a set of methods for learning about the world. Specific sciences are the application of these methods to particular areas of study. Physics is a science: it is the study of physical phenomena. Psychology is a science: it is the study of the psyche (i.e., the human mind). There is no science of data.
Data is a collection of facts. Data, in general, is not the subject of study. Data about something in particular, such as physical phenomena or the human mind, provides the content of study. To call oneself a “data scientist” makes no sense. One cannot study data in general. One can only study data about something in particular.
People who call themselves data scientists are rarely involved in science at all. Instead, their work primarily involves mathematics, and usually the branch of mathematics called statistics. They are statisticians or mathematicians, not data scientists. A few years ago, Hal Varian of Google declared that “statistician” had become the sexy job of our data-focused age. Apparently, Varian’s invitation for statisticians to hold their heads up in pride was not enough for some of them, so they coined a new term. When something loses its luster, what do you do? Some choose to give it a new name. Thus, statisticians become data scientists and data becomes “Big Data.” New names, in and of themselves, change nothing but perception; nothing of substance is gained. Only by learning to engage in data sensemaking well will we do good for the world. Only by doing actual good for the world will we find contentment.
So, you might be wondering why anyone should care if statisticians choose to call themselves data scientists, a nonsensical name. I care because people who strive to make sense of data should, more than most, be sensitive to the deafening noise that currently makes the knowledge that resides in data so difficult to find. The term “data scientist” is just another example of noise. It adds confusion to an already complicated and increasingly noisy world.
P.S. I realize that the term “data science” is only one of many misnomers that confuse the realm of data sensemaking. I myself am guilty of using another: “business intelligence.” This term is a misnomer (and an oxymoron as well) in that, as with data science, when it is practiced effectively, business intelligence is little more than another name for statistics. It has rarely been practiced effectively, however. Most of the work and products that bear the name business intelligence have delivered overwhelming mounds of data that is almost entirely noise.
January 12th, 2017
As its default mode of operation, the human brain uses the least amount of information necessary to make sense of the world before making decisions. This product of evolution was an efficient and effective strategy when we lived in a simple, familiar world. We no longer live in that world. We can still use this strategy to make sense of those aspects of our world that remain relatively simple and familiar (e.g., walking from point A to point B without tripping or falling into a hole), but we must use more advanced strategies when navigating the complex and/or unfamiliar. The default mode of thinking, which is intuitive, feeling-based, and fast, utilizing efficient heuristics (rules of thumb), is called System 1 thinking. The more advanced and more recently evolved mode of thinking, which is reflective, rational, and slow, is called System 2 thinking. Both are valid and useful. The trick is knowing when to shift from System 1 to System 2.
In my opinion, many of the problems that we suffer from today occur because we fail to shift from System 1 to System 2 when needed. For instance, electing the president of the most powerful nation on Earth requires System 2 thinking. That’s obvious, I suppose, but even such a mundane task as grocery shopping requires System 2 thinking to avoid choices that are fueled merely by marketing.
Defaults are automatic and largely unconscious. A single mode-of-thinking default doesn’t work when life sometimes requires System 1 and at other times requires System 2. Instead, rather than a default mode of thinking, we would benefit from a default of shifting into one or the other mode depending on the situation. This default doesn’t exist, but it could be developed, to an extent, through a great deal of practice over a great deal of time. Only by repeating the conscious act of shifting from System 1 to System 2, when necessary, over and over again, will we eventually reach the point where the shift will become automatic.
For now, we can learn to bring our mode of thinking when making decisions into conscious awareness and create the moments that are necessary to effect the System 1 to System 2 shift when it’s needed. Otherwise, we will remain the victims of hunter-gatherer thinking in a modern world that demands complex and sometimes unfamiliar choices, many of which come with significant, potentially harmful consequences. How do we make this happen? This is a question that deserves careful (i.e., System 2) study. One thing I can say for sure, however, is that we can learn to pause. The simple act of stopping and taking a moment to ask, “Is this one of those situations that, because it is complex or unfamiliar, requires reflection?” is a good start.
January 5th, 2017
Last June I celebrated my 62nd birthday. As I look back on my life, my early years seem like distant memories of a different age, yet the years also seem to have flown by in an instant. Our lives are brief when superimposed on history, but they can be rich if we find a way to contribute to history. I feel that my work in the field of data visualization has provided that opportunity, and I’m incredibly grateful.
I have worked as an information technologist for 33 years. Similar to many other thoughtful IT professionals, I have a love-hate relationship with technologies. My feelings about them range from ecstasy to depression and disgust. I love technologies that are useful and work well, but I dislike all else, which includes most of the IT products on the market.
We humans are distinguished from other species in part by our creation and use of tools (a.k.a., technologies). Our relationship with these technologies has changed considerably since the hunter-gatherer days, especially since the advent of the computer. The human condition is increasingly affected for both good and ill by our technologies. We need to evaluate them with increasing awareness and moral judgment. We need to invite them into our lives and the lives of our children with greater care.
In the early days of my IT career, I spent a decade working in the world of finance. I was employed by one of the financial institutions that later contributed to the meltdown of 2007 and 2008. In fact, if I’m not mistaken, my employer invented the infamous reverse-interest mortgage loan. I was a manager in the loan service department at a time when a large group of employees had the job of explaining to customers why their loan balances were increasing. Fortunately, I never had to answer those questions myself, which I would have found intolerable.
During those years, I remember learning about the famous 80/20 rule (a.k.a., the Pareto Principle), but what I learned at the time was a perversion of the principle that says a lot about the culture in which I worked. I was told that the 80/20 rule meant that we should only work to satisfy 80% of the need, for the remaining 20% wasn’t worth the effort. When we built IT systems, we attempted to address only 80% of what was needed with tools that worked only 80% of the time. Excellence was never the goal; we sought “good enough.” But good enough for what? For most technology companies, the answer is “good enough to maximize revenues for the next few quarters.” A product that is only 80% good or less can be camouflaged for a while by deceitful marketing. By the time customers discover the truth, it will be too late: their investment will have already been made and those who made it will never admit their error, lest they be held responsible.
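For readers unfamiliar with what the Pareto Principle actually says, it is an observation about how effects tend to be distributed across causes (roughly 80% of effects flow from 20% of causes), not a license to stop at 80% of the work. Here is a minimal sketch, using made-up bug counts per software module as hypothetical data, of how one might check what share of causes accounts for 80% of the total effect:

```python
# Hypothetical data: bugs reported per module (illustrative only).
bug_counts = [120, 45, 30, 15, 8, 5, 4, 3, 2, 1]

total = sum(bug_counts)
cumulative = 0
# Walk through the modules from most to least buggy, accumulating
# their bug counts until we have covered at least 80% of all bugs.
for i, count in enumerate(sorted(bug_counts, reverse=True), start=1):
    cumulative += count
    if cumulative / total >= 0.8:
        share_of_causes = i / len(bug_counts)
        break

print(f"{share_of_causes:.0%} of modules account for "
      f"{cumulative / total:.0%} of bugs")
# With this sample data: 30% of modules account for 84% of bugs.
```

With real data the split is rarely exactly 80/20, but heavily skewed distributions like this one are common, which is the principle’s genuine insight.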
Traditional theories of economics assume rational behavior. A relatively recent newcomer, Behavioral Economics, has shown, however, that human economic behavior is often far from rational. The same can be said of the human production and use of technologies. For eons after our progenitors became tool users and eventually tool creators, tools arose from real need and rarely caught on unless they worked. This is no longer true, especially of information technologies. Much that we do with computers today did not emerge in response to real needs, is often misapplied in ways that produce little or no benefit, and far too often works poorly, if at all. This suggests that a new scientific discipline may be needed to study these technologies to improve their usefulness and to diminish their waste and harmful effects. I propose that we call this new field of study Itology (i.e., IT-ology, pronounced eye-tology). Its focus would be on the responsible creation and use of information technologies. Whether the name “Itology” is adopted doesn’t matter, but making this area of study integral to IT certainly does.