Thanks for taking the time to read my thoughts about Visual Business Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions that are either too urgent to wait for a full-blown article or too limited in length, scope, or development to require the larger venue. For a selection of articles, white papers, and books, please visit my library.

 

Logarithmic Confusion

March 21st, 2018

We typically think of quantitative scales as linear, with equal quantities from one labeled value to the next. For example, a quantitative scale ranging from 0 to 1000 might be subdivided into equal intervals of 100 each. Linear scales seem natural to us. If we took a car trip of 1000 miles, we might imagine that distance as subdivided into ten 100-mile segments. It isn’t likely that we would imagine it subdivided into four logarithmic segments of 1, 9, 90, and 900 miles. Similarly, we think of time’s passage—also quantitative—in terms of days, weeks, months, years, decades, centuries, or millennia: intervals that are equal (or, in the case of months, roughly equal) in duration.

Logarithms and their scales are quite useful in mathematics and at times in data analysis, but they are only useful for presenting data in those relatively rare cases when the audience consists of people who have been trained to think in logarithms. With training, we can learn to think in logarithms, although I doubt that it would ever come as easily or naturally as thinking in linear units.

For my own analytical purposes, I use logarithmic scales primarily for a single task: to compare rates of change. When two time series are displayed in a line graph, using a logarithmic scale allows us to easily compare the rates of change along the two lines by comparing their slopes, for equal slopes represent equal rates of change. This works because units along a logarithmic scale increase by rate (e.g., ten times the previous value for a log base 10 scale or two times the previous value for a log base 2 scale), not by amount. Even in this case, however, I would not ordinarily report to others what I’d discovered about rates of change using a graph with a logarithmic scale, for all but a few people would misunderstand it.
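To make this concrete, here’s a minimal sketch in Python (using NumPy and matplotlib, with growth rates I’ve invented for illustration) that plots two exponentially growing time series on both a linear and a logarithmic scale. On the logarithmic scale, the steeper line is simply the one growing at the faster rate, regardless of the magnitudes involved.

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2000, 2021)
# Two hypothetical series: one small but growing 15% per year,
# the other large but growing only 5% per year.
small_fast = 10 * 1.15 ** (years - 2000)
large_slow = 1000 * 1.05 ** (years - 2000)

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
for ax in (ax_lin, ax_log):
    ax.plot(years, small_fast, label="small, 15% yearly growth")
    ax.plot(years, large_slow, label="large, 5% yearly growth")
    ax.legend()

ax_lin.set_title("Linear scale: slopes show amounts of change")
ax_log.set_yscale("log")  # equal slopes now represent equal rates of change
ax_log.set_title("Logarithmic scale: slopes show rates of change")
plt.tight_layout()
plt.show()
```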

I decided to write this blog piece when I ran across the following graph in Steven Pinker’s new book Enlightenment Now:

The darkest line, which represents the worldwide distribution of per capita income in 2015, is highlighted as the star of this graph. It has the appearance of a normal, bell-shaped distribution. This shape suggests an equitable distribution of income, but look more closely. In particular, notice the income scale along the X axis. Although the labels along the scale do not consistently represent logarithmic increments—odd but never explained—the scale is indeed logarithmic. Had a linear scale been used, the income distribution would appear significantly skewed, with a peak near the lower end and a long declining tail extending to the right. I can think of no valid reason for using a logarithmic scale in this case. A linear scale ranging from $0 per day at the low end to $250 per day or so at the high end would work fine. Ordinarily, $25 intervals would work well for a range of $250, breaking the scale into ten intervals, but this wouldn’t allow the extreme poverty threshold of just under $2.00 to be delineated, because it would be buried within the initial interval of $0 to $25. To accommodate this particular need, tiny intervals of $2.00 each could be used throughout the scale, placing extreme poverty almost entirely within the first interval. As an alternative, larger intervals could be used and the percentage of people below the extreme poverty threshold could be noted as a number.
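To see how much the choice of scale shapes this impression, here’s a rough sketch in Python. The incomes are drawn from a lognormal distribution with parameters I invented for illustration (this is not Gapminder’s data), but the effect is general: the same skewed distribution looks comfortably bell-shaped on a logarithmic X axis and starkly skewed on a linear one.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Hypothetical daily incomes; a lognormal distribution is a rough
# stand-in for real income data, with invented parameters.
income = rng.lognormal(mean=2.5, sigma=1.0, size=100_000)

fig, (ax_log, ax_lin) = plt.subplots(1, 2, figsize=(10, 4))

# Log-spaced bins on a logarithmic axis: the skew disappears visually.
log_bins = np.logspace(np.log10(income.min()), np.log10(income.max()), 60)
ax_log.hist(income, bins=log_bins)
ax_log.set_xscale("log")
ax_log.set_title("Logarithmic X axis: looks bell-shaped")

# Equal $2 bins on a linear axis: the long right tail is plain to see.
ax_lin.hist(income, bins=np.arange(0, 250, 2))
ax_lin.set_title("Linear X axis: the skew is obvious")

for ax in (ax_log, ax_lin):
    ax.set_xlabel("Income per day ($)")
plt.tight_layout()
plt.show()
```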

After examining Pinker’s graph closely, you might be tempted to argue that its logarithmic scale provides the advantage of showing a clearer picture of how income is distributed in the tiny $0 to $2.00 range. This, however, is not its purpose. Even if this level of detail were relevant, the information that appears in this range isn’t real. The source data on which this graph is based is not precise enough to represent how income is distributed between $0 and $2.00. If reliable data existed and we really did need to clearly show how income is distributed from $0 to $2.00, we would create a separate graph to feature that range only, and that graph would use a linear scale.

Why didn’t Pinker use a linear scale? Perhaps because a linear scale would reveal a dark side that somewhat undermines the message of his book that the world is getting better. Although income has increased overall, the distribution of income has become less equitable, and this pattern persists today.

When I noticed that Pinker derived the graph from Gapminder and attributed it to Ola Rosling, I decided to see if Pinker introduced the logarithmic scale or inherited it in that form from Gapminder. Upon checking, I found that Gapminder’s graphs of wealth distribution indeed feature logarithmic scales. If you go to the part of Gapminder’s website that allows you to use their data visualization tools, you’ll find that you can only view the distribution of wealth logarithmically. Even though some of Gapminder’s graphs provide the option of switching between linear and logarithmic scales, those that display distributions of wealth do not. Here’s the default wealth-related graph that can be viewed using Gapminder’s tool:

This provides a cozy sense of bell-shaped equity, which isn’t truthful.

To present data clearly and truthfully, we must understand what works for the human brain and design our displays accordingly. People don’t think in logarithms. For this reason, it is usually best to avoid logarithmic scales, especially when presenting data to the general public. Surely Pinker and Rosling know this.

Let me depart from logarithms to reveal another problem with these graphs. There is no practical explanation for the smooth curves that they exhibit if they’re based on actual income data. The only time we see smooth distribution curves like this is when they result from mathematical calculations, never when they’re based on actual data. Looking at the graph above, you might speculate that when distribution data from each country was aggregated to represent the world as a whole, the aggregation somehow smoothed the data. That’s conceivable, but it isn’t what happened here. If you look closely at the graph above, in addition to the curves at the top of each of the four colored sections, one for each world region, there are many light lines within each colored section. Each of these light lines represents a particular country’s distribution data. With this in mind, look at any one of those light lines. Every single line is smooth beyond the practical possibility of being based on actual income data. Some jaggedness along the lines would always exist. This tells us that these graphs are not displaying unaltered income data for any of the countries. What we’re seeing has been manipulated in some manner. The presence of such manipulation always makes me wary. The data may be a far cry from the actual distribution of wealth in most countries.
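One manipulation that produces curves this smooth is kernel density estimation. I can’t say whether that’s what was done here, but the following sketch (Python again, with a small made-up sample) shows how such smoothing erases the jaggedness that raw data always exhibits.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# A small, hypothetical sample of daily incomes for one country.
sample = rng.lognormal(mean=2.0, sigma=0.8, size=300)

fig, ax = plt.subplots(figsize=(6, 4))

# The raw sample, shown as a histogram, is jagged, as real data always is.
ax.hist(sample, bins=40, density=True, alpha=0.4, label="raw sample")

# A kernel density estimate of the same sample is perfectly smooth.
xs = np.linspace(0, sample.max(), 500)
ax.plot(xs, gaussian_kde(sample)(xs), label="kernel-smoothed estimate")

ax.set_xlabel("Income per day ($)")
ax.legend()
plt.show()
```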

My wariness is magnified when I examine wealth data of this type from long ago. Here’s Gapminder’s income distribution graph for the year 1800:

To Gapminder’s credit, they provide a link above the graph labeled “Data Doubts,” which leads to the following disclaimer:

Income data has large uncertainty!

There are many different ways to estimate and compare income. Different methods are used in different countries and years. Unfortunately no data source exists that would enable comparisons across all countries, not even for one single year. Gapminder has managed to adjust the picture for some differences in the data, but there are still large issues in comparing individual countries. The precise shape of a country should be taken with a large grain of salt.

I would add to this disclaimer that “The precise shape of the world as a whole should be taken with an even larger grain of salt.” This data is not entirely reliable today, and data for the year 1800 is utterly unreliable. As a man of science, Pinker should have included this disclaimer in his book. The claim that 85.9% of the world’s population lived in extreme poverty in 1800 compared to only 11.4% today makes a good story of human progress, but it isn’t a reliable claim. Besides, it’s hard to reconcile my reading of history with the notion that, in 1800, all but 14% of humans were just barely surviving from one day to the next. People certainly didn’t live as long back then, but I doubt that the average person was living well below the threshold of extreme poverty, as this graph suggests.

I’ve grown concerned that the recent emphasis on data storytelling has led to a reduction in clear and accurate truth telling. When I was young, to say that someone “told stories” meant that they made stuff up. This negative connotation of storytelling describes a great deal of data storytelling today. When we encourage people to develop skills in data sensemaking and communication, we should focus their efforts on learning how to discover, understand, and tell the truth. This is seldom how instruction in data storytelling goes. The emphasis is more often on persuasion than truth, more on art (and artifice) than science.

Randomness Is Often Not Random

March 12th, 2018

In statistics, what we often identify as randomness in data is not actually random. Bear in mind, I am not talking about randomly generated numbers or random samples. Instead, I am referring to events about which data has been recorded. We learn of these events when we examine the data. We refer to an event as random when it is not associated with a discernible pattern or cause. Random events, however, almost always have causes. We just don’t know them. Ignorance of cause is not the absence of cause.

Randomness is sometimes used as an excuse for preventable errors. I was poignantly reminded of this a decade or so ago when I became the victim of a so-called random event while undergoing one of the most despised medical procedures known to humankind: a colonoscopy. I was in my early fifties at the time, and it was my first encounter with this dreaded procedure. After this initial encounter, which I’ll now describe, I hoped that it would be my last.

While the doctor was removing one of five polyps that he discovered during his spelunking adventure into my dark recesses, he inadvertently punctured my colon. Apparently, however, he didn’t know it at the time, so he sent me home with the encouraging news that I was polyp free. Having the contents of one’s colon leak out into other parts of the body isn’t healthy. During the next few days, severe abdominal pain developed, and I began to suspect that my 5-star rating was not deserved. Once I was admitted to the emergency room at the same facility where my illness was created, a scan revealed the truth of the colonoscopic transgression. Thus began my one and only overnight stay so far in a hospital.

After sharing a room with a fellow who was drunk out of his mind and wildly expressive, I hope never to repeat the experience. Things were touch and go for a few days as the medical staff pumped me full of antibiotics and hoped that the puncture would seal itself without surgical intervention. Had it not sealed, the alternative would have involved removing a section of my colon and being fitted with a stylish bag for collecting solid waste. To make things more frightening than they needed to be, the doctor who provided this prognosis failed to mention that the bag would be temporary, lasting only about two months while my body ridded itself of infection, followed by another surgery to reconnect my plumbing.

In addition to a visit from the doctor whose communication skills and empathy were sorely lacking, I was also visited during my stay by a hospital administrator. She politely explained that punctures during a routine colonoscopy are random events that occur a tiny fraction of the time. According to her, these events should not be confused with medical error, for they are random in nature, without cause, and therefore without fault. Lying there in pain, I remember thinking, but not expressing, “Bullshit!” Despite the administrator’s assertion of randomness, the source of my illness was not a mystery. It was that pointy little device that the doctor snaked up through my plumbing for the purpose of trimming polyps. Departing from its assigned purpose, the trimmer inadvertently forged a path through the wall of my colon. This event definitely had a cause.

Random events are typically rare, but the cause of something rare is not necessarily unknown and certainly not unknowable. The source of the problem in this case was known, but what was not known was the specific action that initiated the puncture. Several possibilities existed. Perhaps the doctor involuntarily flinched in response to an itch. Perhaps he was momentarily distracted by the charms of his medical assistant. Perhaps his snipper tool got snagged on something and then jerked to life when the obstruction was freed. Perhaps the image conveyed from the scope to the computer screen lost resolution for a moment while the computer processed the latest Windows update. In truth, the doctor might have known why the puncture happened, but if he did, he wasn’t sharing. Regardless, when we have reliable knowledge of several potential causes, we should not ignore an event just because we can’t narrow it down to the specific culprit.

The hospital administrator engaged in another bit of creative wordplay during her brief intervention. Apparently, according to the hospital, and perhaps to medical practice in general, something that happens this rarely doesn’t actually qualify as an error. Rare events, however harmful, are designated as unpreventable and therefore, for that reason, are not errors after all. This is a self-serving bit of semantic nonsense. Whether or not rare errors can be easily prevented, they remain errors.

We shouldn’t use randomness as an excuse for ongoing ignorance and negligence. While it makes no sense to assign blame without first understanding the causes of undesirable events, it also makes no sense to dismiss those events as inconsequential and as necessarily beyond the realm of understanding. Think of random events as invitations to deepen our understanding. We needn’t necessarily make them a priority for responsive action, for other problems that are already understood might deserve our attention more, but we shouldn’t dismiss them either. Randomness should usually be treated as a temporary label.

Big Data, Big Dupe: A Progress Report

February 23rd, 2018

My new book, Big Data, Big Dupe, was published early this month. Since its publication, several readers have expressed their gratitude in emails. As you can imagine, this is both heartwarming and affirming. Big Data, Big Dupe confirms what these seasoned data professionals recognized long ago on their own, and in some cases have been arguing for years. Here are a few excerpts from emails that I’ve received:

I hope your book is wildly successful in a hurry, does its job, and then sinks into obscurity along with its topic.  We can only hope! 

I hope this short book makes it into the hands of decision-makers everywhere just in time for their budget meetings… I can’t imagine the waste of time and money that this buzz word has cost over the past decade.

Like yourself I have been doing business intelligence, data science, data warehousing, etc., for 21 years this year and have never seen such a wool over the eyes sham as Big Data…The more we can do to destroy the ruse, the better!

I’m reading Big Data, Big Dupe and nodding my head through most of it. There is no lack of snake oil in the IT industry.

Having been in the BI world for the past 20 years…I lead a small (6 to 10) cross-functional/cross-team collaboration group with like-minded folks from across the organization. We often gather to pontificate, share, and collaborate on what we are actively working on with data in our various business units, among other topics.  Lately we’ve been discussing the Big Data, Big Dupe ideas and how within [our organization] it has become so true. At times we are like ‘been saying this for years!’…

I believe deeply in the arguments you put forward in support of the scientific method, data sensemaking, and the right things to do despite their lack of sexiness.

As the title suggests, I argue in the book that Big Data is a marketing ruse. It is a term in search of meaning. Big Data is not a specific type of data. It is not a specific volume of data. (If you believe otherwise, please identify the agreed-upon threshold in volume that must be surpassed for data to become Big Data.) It is not a specific method or technique for processing data. It is not a specific technology for making sense of data. If it is none of these, what is it?

The answer, I believe, is that Big Data is an irredeemably ill-defined and therefore meaningless term that has been used to fuel a marketing campaign that began about ten years ago to sell data technologies and services. Existing data products and services at the time were losing their luster in public consciousness, so a new campaign emerged to rejuvenate sales without making substantive changes to those products and services. This campaign has promoted a great deal of nonsense and downright bad practices.

Big Data cannot be redeemed by pointing to an example of something useful that someone has done with data and exclaiming “Three cheers for Big Data,” for that useful thing would still have been done had the term Big Data never been coined. Much of the disinformation that’s associated with Big Data is propagated by good people with good intentions who prolong its nonsense by erroneously attributing beneficial but unrelated uses of data to it. When they equate Big Data with something useful, they make a semantic connection that lacks a connection to anything real. That semantic connection is no more credible than attributing a beneficial use of data to astrology. People do useful things with data all the time. How we interact with and make use of data has been gradually evolving for many years. Nothing qualitatively different about data or its use emerged roughly ten years ago to correspond with the emergence of the term Big Data.

Although there is no consensus about the meaning of Big Data, one thing is certain: the term is responsible for a great deal of confusion and waste.

I read an article yesterday titled “Big Data – Useful Tool or Fetish?” that exposes some failures of Big Data. For example, it cites the failed $200,000,000 Big Data initiative of the Obama administration. You might think that I would applaud this article, but I don’t. I certainly appreciate the fact that it recognizes failures associated with Big Data, but its argument is logically flawed. Big Data is a meaningless term. As such, Big Data can neither fail nor succeed. By pointing out the failures of Big Data, this article endorses its existence, and in so doing perpetuates the ruse.

The article correctly assigns blame to the “fetishization of data” that is promoted by the Big Data marketing campaign. While Big Data now languishes with an “increasingly negative perception,” the gradually growing ranks of skilled professionals and useful technologies continue to make good use of data, as they always have.


Take care,

P.S. On March 6th, Stacey Barr interviewed me about Big Data, Big Dupe. You can find an audio recording of the interview on Stacey’s website.

Different Tools for Different Tasks

February 19th, 2018

I am often asked a version of the following question: “What data visualization product do you recommend?” My response is always the same: “That depends on what you do with data.” Tools differ significantly in their intentions, strengths, and weaknesses. No one tool does everything well. Truth be told, most tools do relatively little well.

I’m always taken by surprise when the folks who ask me for a recommendation fail to understand that I can’t recommend a tool without first understanding what they do with data. A fellow emailed this week to request a tool recommendation, and when I asked him to describe what he does with data, he responded by describing the general nature of the data that he works with (medical device quality data) and the amount of data that he typically accesses (“around 10k entries…across multiple product lines”). He didn’t actually answer my question, did he? I think this was, in part, because he and many others like him don’t think of what they do with data as consisting of different types of tasks. This is a fundamental oversight.

The nature of your data (marketing, sales, healthcare, education, etc.) has little bearing on the tool that’s needed. Even the quantity of data has relatively little effect on my tool recommendations unless you’re dealing with excessively large data sets. What you do with the data—the tasks that you perform and the purposes for which you perform them—is what matters most.

Your work might involve tasks that are somewhat unique to you, which should be taken into account when selecting a tool, but you also perform general categories of tasks that should be considered. Here are a few of those general categories:

  • Exploratory data analysis (Exploring data in a free-form manner, getting to know it in general, from multiple perspectives, and asking many questions to understand it)
  • Rapid performance monitoring (Maintaining awareness of what’s currently going on as reflected in a specific set of data to fulfill a particular role)
  • A routine set of specific analytical tasks (Analyzing the data in the same specific ways again and again)
  • Production report development (Preparing reports that will be used by others to look up data that’s needed to do their jobs)
  • Dashboard development (Developing displays that others can use to rapidly monitor performance)
  • Presentation preparation (Preparing displays of data that will be presented in meetings or in custom reports)
  • Customized analytical application development (Developing applications that others will use to analyze data in the same specific ways again and again)

Tools that do a good job of supporting exploratory data analysis usually do a poor job of supporting the development of production reports and dashboards, which require fine control over the positioning and sizing of objects. Tools that provide the most flexibility and control often do so by using a programming interface, which cannot support the fluid interaction with data that is required for exploratory data analysis. Every tool specializes in what it can do well, assuming it can do anything well.

In addition to the types of tasks that we perform, we must also consider the level of sophistication at which we perform them. For example, if you engage in exploratory data analysis, the tool that I recommend would vary significantly depending on the depth of your data analysis skills. I wouldn’t recommend a complex statistical analysis product such as SAS JMP if you’re untrained in statistics, just as I wouldn’t recommend a general-purpose tool such as Tableau Software if you’re well trained in statistics, except for performing statistically lightweight tasks.

Apart from the tasks that we perform and the level of skill with which we perform them, we must also consider the size of our wallet. Some products require a significant investment to get started, while others can be purchased for an individual user at little cost or even downloaded for free.

So, what tool do I recommend? It depends. Finding the right tool begins with a clear understanding of what you need to do with data and of your ability to do it.

Take care,

Introducing www.Stephen-Few.com

December 27th, 2017

I’ve ended my public Visual Business Intelligence Workshops and quarterly Visual Business Intelligence Newsletter, in part, to make time for other ventures. You have perhaps noticed that here, in my Perceptual Edge blog articles, I sometimes veer from data visualization to reflect my broader interests. In this blog, I’ve usually tried to keep my topics at least tangentially related to data sensemaking, but I now find this too confining. Going forward, I’d like to release the reins and write about any topic that might benefit from my perspective. Rather than expanding the scope of Perceptual Edge for this purpose, however, I’ve created a new website—www.Stephen-Few.com—as a venue for all of my other interests.

If you’ve found my work useful in the past, you might find the blog on my new website useful as well. I promise, I won’t waste your time with self-indulgent articles. Most of these articles will address the following topics:

  • Ethics, especially ethical approaches to the development and use of information technologies
  • Critical thinking
  • Effective communication
  • Brain science
  • Scientific thinking
  • Skepticism
  • Deep learning

I will feel free, however, to venture beyond these when so inspired.

When I write about data visualization or other aspects of data sensemaking, I’ll continue to post those articles here in my www.PerceptualEdge.com blog as well. Other articles, however, will only be posted in my www.Stephen-Few.com blog.

To launch the new website, I posted my first blog article there today titled Beware Incredible Technology-Enabled Futures. In it, I expose the frightening nonsense of a new TED talk titled “Three Steps to Surviving the Robot Revolution,” by “data philosopher” and “serial entrepreneur” Charles Radclyffe.

Take care,