Thanks for taking the time to read my thoughts about Visual Business Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions that are either too urgent to wait for a full-blown article or too limited in length, scope, or development to require the larger venue. For a selection of articles, white papers, and books, please visit my library.


Packed Bubbles Finally Make Sense

November 8th, 2016

I couldn’t understand it when Tableau Software introduced its packed bubble chart in version 8. It’s a useless form of display, assuming you care about the data. Today, however, I discovered what Tableau must have had in mind when they added this chart. The following example appears in an article that was published today about Tableau’s annual customer event, which is currently taking place in Austin:

[Image: a packed bubble chart forming a portrait of Elvis]

Clearly, my inability to recognize the value of packed bubbles was a failure of imagination.

Take care,


Bad Science and the Fear of “Methodological Terrorism”

October 25th, 2016

A few days ago a data visualization developer friend of mine, Robert Monfera, sent me a link to a blog post titled “On methodological terrorism” by a thoughtful statistician named Robert Grant. Grant lays out an intelligent and entertaining case for speaking out against methodological flaws in scientific research—a practice that some on the receiving end characterize as “methodological terrorism.” He, I, and a growing number of others are speaking out to expose bad methodological practices in scientific research, not because we enjoy conflict, and certainly not because we’re assholes, but because bad science is always a waste of time and resources and it sometimes causes harm.

Grant wrote his blog post, in part, as a response to an article in a magazine of the Association for Psychological Science written by Susan Fiske, who decried the venomous nature of critiques and coined the term “methodological terrorism,” along with a few other bombastic terms, including “destructo-critics,” “data police,” and “vigilante critique.” Fiske seems to be describing people such as Andrew Gelman, Ben Goldacre, Gerd Gigerenzer, and John Ioannidis, whose work and integrity I greatly admire.

Here are a few of my favorite excerpts from Grant’s blog post:

If we view it as our civic duty to promote good research, it is also our civic duty not to tolerate bad research.

There is a corrupt system which you are obliged to end, and you will have to act outside the system to do so. Not by blowing up their offices…but by confronting their work when it is wrong, in the best scientific tradition, and refusing to go away until it is fixed.

Fiske [the scientist who coined the term “methodological terrorism”] said “it’s careers that are getting broken”; yes, that is precisely the objective. Acting out of ignorance, then seeing the light and fixing the problem is one thing, fighting not to change is another, and someone who refuses to learn and improve is not a scientist…

We should scare them all right, but in a thoroughly scientific way. It needs to be clear that nobody’s blunders are safe from being called out. We need to go after anyone and everyone, not just the big names.

The following excerpt from Grant’s blog post describes the circle-the-wagons resistance that the information visualization research community has exhibited in response to my critiques:

The current system of a small number of the same people approving funding for studies, doing them, and editing the journals where they are published is arguably corrupt. The subject experts who run it benefit so much from it that they certainly don’t allow dissenting voices on their patch, and, unable to control self-publication on blogs and social media, react forcefully. Journals and conferences are used as an organ of repression, and we should focus on influencing them and not allowing them to be a refuge for irresponsible conduct.

Grant points out that researchers who cry foul in response to critiques of their work or the work of their communities tend to characterize those critiques as crossing the line into meanness. It is ironic that they often oppose these critiques through truly mean attempts at character assassination (“kill the messenger”) rather than rational discourse, which demonstrates the weakness of their position. Borrowing an analogy that was used during a keynote presentation last year by the designer Mike Monteiro, Grant likens the work of critics to that of dentists:

Now, consider the dentist. You pay them to tell it like it is. If your molar is rotten and has to come out, you want to hear it and have some straight-talking advice on what to do about it. You don’t enjoy hearing the news, but better now than later in agony. That is the service they provide — to tell you the facts, not to be your friend. We need to stop being friends of subject experts and start being their dentists instead.

Take a few minutes to read Grant’s blog post in full. If you care about science, you’ll find it worthwhile.

Take care,


“Should We?”: The Question That Is Rarely Asked

October 17th, 2016

The unique ability of the human brain to create technologies has taken us far. The benefits of technology, however, are not guaranteed, yet we celebrate and pursue them with abandon. When we imagine new technological abilities, we tend to ask one question only: “Can we?” “Can we create such a thing?” However, we’re good at creating what we can but shouldn’t. “Should we?”, though rarely asked, is the more important question by far.

I recently read a book by Samuel Arbesman, entitled Overcomplicated. I found it intriguing, yet also utterly frightening. Arbesman is Scientist in Residence at Lux Capital, a science and technology venture capital firm. He is a fine spokesperson for his employer’s interests, for he gives the technologies that make venture capitalists rich free license to do what they will by declaring their overcomplication inevitable.

Many modern technologies are now complicated in ways and to degrees that place them beyond our understanding. Arbesman accepts these over-complications as a given. In light of this, he proposes ways to study them that might yield a bit more understanding, even though, in his opinion, they will forever remain beyond our full grasp. He argues that modern technologies are like biological systems—the result of evolution rather than design—sometimes a mishmash of kluges embedded in millions of lines of programming code and sometimes the result of computers generating their own code with little or no human involvement. At no point in the book does Arbesman ask the question that was constantly screaming in my head as I read it: “Should we?” Should we create technologies that exceed our understanding and can therefore never be fully controlled? The only rational and moral answer to this question is “No, we shouldn’t.”

Arbesman assumes that we often cannot design and develop modern technologies in ways that remain within the reach of human understanding. Even though he acknowledges several examples of technologies that have created havoc because they were not understood, such as financial trading systems and power grids, he accepts these over-complications as inevitable.

As a technology professional of many years, I see things differently. These technological monsters that we create today as the products of kluges are over-complicated not because they cannot be kept within the realm of our understanding and control but because of poor, sloppy, undisciplined, and shortsighted design. Arbesman and others who pull the strings of modern information technologies want us to believe that these technologies are inherently and necessarily beyond human understanding, but this is a lie. Those who create these technologies are simply not willing to do the work that’s required to build them well.

We have a choice. We could demand better design. We could and should set the limits of human understanding as the unyielding boundary of our technologies. We can choose to only build what we can understand. This is harder than quickly and carelessly throwing together kluges or trusting algorithms to manage themselves, but it is a path that we must take to avoid destruction.

Arbesman advocates humility in the face of technologies that we cannot understand, but this is an odd humility, for it’s wrapped in hubris—a belief that we have the right to unleash on the world that which we can neither understand nor control. We may have this ability, but we do not have this right, for it is an almost certain path to destruction. Along with most of the technologists that he admiringly quotes in the book, Arbesman seems to embrace all information technologies that can be created as both inevitable and good—a reverence for Technology with a capital “T” that is both irrational and dangerous.

I’m certainly not the only technology professional who is concerned about this. Many share my perspective and express it, but our concerns are not backed by the deep pockets of technology companies, which currently set the agendas and shape the values of cultures throughout the developed world. The fear that our technologies could do great harm if left uncontrolled has been around for ages. This is a reasonable fear. In his film Jurassic Park, Steven Spielberg poignantly expressed this fear regarding biological technologies. There’s a great scene in the movie when a scientist played by the actor Jeff Goldblum asks the questions that we should always ask about potential technologies before we create and unleash them on the world. The scene accurately frames the problem as one that results from the selfishness of those who care only about their own immediate gains, never raising their eyes to look further into the future and never doubting the essential goodness of their creations, despite the monsters we are capable of creating.

Although this concern about unbridled technological development is occasionally expressed, it has had little effect on modern culture so far. Each of us who cares about the future of humanity and understands that the arc of technological development can be brought into line with the interests of humanity without sacrificing anything of real value should do what we can to voice our concerns. In your own organization, when an opportunity to create, modify, or uniquely apply a technology arises, you can ask, “Should we?” This might not be the path to popularity—those who choose to do good are often unappreciated for a time—but it is the only path that doesn’t lead to destruction. Be courageous, because you should.

Take care,


Examples of False Claims about Self-Service Analytics

September 12th, 2016

I recently wrote about The Myth of Self-Service Analytics in this blog. Some of you seemed to think that I was exaggerating the claims that vendors make about self-service analytics—in particular, the claim that their tools eliminate the need for analytical skills. To back my argument, I’ve collected a few examples of these claims from several vendors.

Information Builders

Self-service BI and analytics isn’t just about giving tools to analysts; it’s about empowering every user with actionable and relevant information for confident decision-making. (link)

Self-service Analytics for Everyone…Who’s Everyone? Your entire universe of employees, customers, and partners. Our WebFOCUS Business Intelligence (BI) and Analytics platform empowers people inside and outside your organization to attain insights and make better decisions. (link)

Qlik

Drive insight discovery with the data visualization app that anyone can use. With Qlik Sense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions.

Explore data with smart visualizations that automatically adapt to the parameters you set — no need for developers, data scientists or designers. (link)

Tableau

Analytics anyone can use. (link)

TIBCO Spotfire

The Spotfire Platform delivers self-service analytics to everyone in your company. (link)

Self-service analytics gives end users the ability to analyze and visualize their own data whenever they need to. (link)

SAP

This tool is intended for those who need to do analysis but are not Analysts nor wish to become them. (link)

Salesforce.com

Welcome to a new era of data visualization software. An era of self-service BI where instant access to insights wins the day time and time again. With Wave Analytics, now anyone can organize and present information in a much more intuitive way. Without a team of analysts. (link)

With self-service analytics, you can instantly slice and dice data on any device, without waiting for IT or analysts. (link)

ZoomData.com

Zoomdata brings the power of self-service BI to the 99%—the non-data geeks of the world who thirst for a simple, intuitive, and collaborative way to visually interact with data to solve business problems. (link)

Targit.com

TARGIT Decision Suite gives you self-service analytics solutions intuitive enough for the casual user… (link)

Take care,


Direct the Course of Evolution, or Perish

September 2nd, 2016

When evolution was purely biological, there were no reins to direct it, for evolution followed the course of nature. With Homo sapiens, however, another form of evolution emerged that is exponentially faster—cultural evolution—which we can direct to some degree through deliberate choices. We haven’t taken the reins yet, however, and seldom even recognize that the reins exist, but we must if we wish to survive.

In the early days of our species, when our brains initially evolved the means to think abstractly, resulting in language and the invention of tools, we were not aware of our opportunity to direct our evolution. We are no longer naïve, or certainly shouldn’t be. We recognize and celebrate the power of our technologies, but seldom take responsibility for the effects of that power. Cultural evolution has placed within our reach not only the means of progress but also the means of regress. The potential consequences of our technologies have grown. Though we can choose to ignore these consequences and often work hard to do so, they’re now right up in our faces, screaming for attention.

Some of our technologies, beginning with the industrial revolution and continuing until now, have contained seeds of destruction. Technologies that rely on fossil fuels, which contribute to global warming, are a prominent example. We can work to undo their harm by (1) abstaining from their use, (2) developing new technologies to counter their effects, or (3) developing alternative technologies to replace them. When we create technologies, we should first consider their effects and proceed responsibly. We’re not doing this with information technologies. Instead, we embrace them without question, naively assuming that they are good, or at worst benign. Most information technologies provide potential benefits, but also potential harms.

Technologies that support data sensemaking and communication should be designed and used with care. We should automate only those activities that a computer can perform well and humans cannot. Effective data sensemaking relies on reflective, rational thought, resulting in understanding moderated by ethics. Computers can manipulate data in various ways but they cannot understand data and they have no concept of ethics. Computers should only assist in the data sensemaking process by augmenting our abilities without diminishing them.

You might think that I’m fighting to defend and preserve the dignity and value of humanity against the threat of potentially superior technologies. I care deeply about human dignity and the value of human lives, but these aren’t my primary motives. If we could produce a better world for our own and other species by granting information technologies free rein, I would heartily embrace the effort, but we can’t. By shifting more data sensemaking work to information technologies, as we are currently doing, we are inviting inferior results and a decline in human abilities.

Despite our many flaws, as living, sentient creatures we humans are able to make sense of the world and attempt to act in its best interests in ways that our information technologies cannot. We don’t always do this, but we can. Computers can be programmed to identify some of the analytical findings that we once believed only humans could discover, but they cannot perform these tasks as we do, with awareness, understanding, and care. Their algorithms lack the richness of our perceptions, thoughts, values, and feelings. We dare not entrust independent decisions about our lives and the world to computer algorithms.

We must understand our strengths and limitations and resist the temptation to create and rely on technologies to do what we can do better. We should not sit idly by while those who create and own information technologies promote them without forethought simply because doing so serves their interests. No matter how well-intentioned technology companies and their leaders believe themselves to be, their judgments are deeply biased.

Technologies—especially information technologies—change who we are and how we relate to one another and the world. We are capable of thinking deeply about data when we develop the requisite skills, but we lose this capability when we allow computers to remove us from the loop of data sensemaking. The less we engage in deep thinking, the less we’re able to do it. So, we’re facing more than the problem that computers cannot fully reproduce the results of our brains; we’re also facing the forfeiture of these abilities if we cease to use them. By sacrificing these abilities, we would lose much that makes us human. We would devolve.

At any point in history, one question is always fundamental: “What are we going to do now?” We can’t change the past, but we must take the reins of the future. Among a host of useful actions, we must resist anyone who claims that their data sensemaking tools will do our thinking for us. They have their own interests in mind, not ours. Resistance isn’t futile; at least not yet.

Take care,
