Thanks for taking the time to read my thoughts about Visual Business
Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions
that are either too urgent to wait for a full-blown article or too
limited in length, scope, or development to warrant the larger venue.
For a selection of articles, white papers, and books, please visit
July 18th, 2016
Expertise isn’t what it used to be. Beginning with the industrial revolution and continuing into our modern information age, new technologies have altered our view of expertise and influenced the degree to which we pursue it. Technologies, properly understood, are tools that we humans create to augment our abilities. Good technologies are created and used by experts to extend their skills, not to replace them.
This relationship between experts and the technologies that they use was much clearer before the industrial revolution. Skilled craftspeople—carpenters, blacksmiths, cooks, farmers, and even accountants—cherished their tools and used them well. They understood which tools to use, when to use them, and how to use them productively. Their tools were a natural extension of their minds and bodies. Since the beginning of the industrial revolution, however, many tasks that were performed by humans in the past are now performed by machines. This is a mixed blessing. Some tasks can be performed better by machines, such as fast mathematical calculations done by an abacus, calculator, or computer. Some tasks that can be done better by people can be done by machines more cheaply, so we sometimes sacrifice quality for affordability, such as when we buy a piece of manufactured furniture rather than paying a skilled craftsperson to build something better. Some tasks, however, can only be performed by humans and can at best be augmented by technologies. Data sensemaking and communication fall into this category.
What happens when technologies are used to do what only humans can do well? The outcomes are poor in comparison and people are discouraged from fully developing those skills. Similarly, what happens when the tools of a trade are designed by people who don’t understand the trade? Again, the outcomes suffer and people with expertise are frustrated in their efforts. If you were a warrior of bygone days preparing for battle, would you buy a sword made by someone who didn’t intimately understand its use in battle? Not if you wanted to survive. Most data sensemaking and communication tools are dull blades with slippery hilts.
My field of data visualization—the use of visual representations to explore, make sense of, and communicate quantitative data—falls into the broader category of knowledge work. Expertise in knowledge work can be difficult to assess. This is different from expertise in playing the violin or performing gymnastics. Over hundreds of years, clear standards and measures of musical and athletic performance have been established, along with clear methods for developing expertise under the guidance of teachers, mentors, and coaches. Unlike these areas, methods for developing skill in data visualization are not firmly established. The field of data visualization is chaotic. We can’t even agree on a definition of the term, let alone determine what qualifies as expertise and the path to developing it. In my own mind and work, however, the field is clearly defined and the principles and practices, although neither complete nor fully formed, are firmly rooted in science and years of practical experience.
During the 20th century and so far in the 21st, we have watched in amazement as musicians and athletes have achieved what was previously thought impossible. Expertise in these realms has increased as each new generation built on the foundation of its forebears, coaxing their brains and bodies to reach new heights through increasingly advanced training regimens. In the 1908 Summer Olympics, a diver barely averted disaster when he attempted a double somersault, which was considered too dangerous, prompting recommendations that it be banned from competition. Today, the double somersault is an entry-level dive. Ten-year-olds can perform it perfectly, and in high school the best divers are doing four and a half somersaults. Sadly, similar advancement is not happening among most knowledge workers. Are data sensemakers and communicators more skilled on average today than they were 50 years ago? I doubt it. In fact, it’s entirely possible that expertise in this realm has declined as technologies have displaced and discouraged the skilled efforts of humans.
The web has contributed to the problem. Despite its many benefits, the web has provided a convenient platform for inflated claims of expertise. In data visualization, the actual number of experts is but a small fraction of those who boldly make the claim in blogs. And now, as traditional book publishers are scrambling to remain viable, they eagerly offer book contracts to any blogger with a modest following. You can get a book published without first developing expertise in the subject matter. The book Data Visualization for Dummies by Mico Yuk is a vivid example. Apparently Wiley Press forgot that “for dummies” wasn’t meant to be taken literally.
Don’t claim expertise that you don’t possess. Never inflate your abilities. False claims do harm, even to yourself. If you believe that you’ve already reached the heights of achievement, you’ll spend your time demonstrating and proclaiming your minor achievements, rather than working to improve. You can only evaluate your own expertise by comparing yourself to experts, not to others with superficial knowledge and skills on par with your own. Forming a mutual admiration society built on mediocrity might feel good to its members, but it isn’t progress.
What’s especially disheartening about the current lack of expertise in data visualization is the fact that expertise is within reach. Expertise is not an exclusive club of the uniquely talented. We can all develop deep expertise in a chosen field. In every field of pursuit, we develop expertise in the same way: through a great deal of study and practice. The only natural talent that’s needed for developing expertise is one that we all possess: highly adaptable brains and bodies. We humans are the animal that learns. But if we trade this evolutionary advantage to instead become the animal that is shaped and limited by its tools, we won’t survive for long.
I recently wrote about deep work, the focused activity that is needed to perform at optimal levels. Now I’m talking about a kindred process, deep learning, which is needed to develop expertise. A wonderful new book titled Peak: Secrets from the New Science of Expertise was recently written by one of the world’s great authorities on the topic, Anders Ericsson, with the help of science writer Robert Pool.
Ericsson, a professor of psychology at Florida State University, has been researching expertise for over 30 years. In Peak, Ericsson explains what expertise is, both in practice and in terms of brain development, and describes the universal gold standard of study and practice that is needed to achieve it, which he calls deliberate practice. I won’t steal his thunder by revealing the content of the book, except to say that he dispels the magical thinking about shortcuts to expertise. Deliberate practice is hard work and it takes a great deal of time. Not all practice is productive; much of it does not lead to increased expertise. Deliberate practice involves guidance and feedback from existing experts. It takes advantage of the profound adaptability of our brains and bodies when pushed beyond our comfort zones in the right way and to the right degree. To whet your appetite, here’s a brief excerpt from the book:
But we now understand that there’s no such thing as a predefined ability. The brain is adaptable, and training can create skills…that did not exist before. This is a game changer, because learning now becomes a way of creating abilities rather than of bringing people to the point where they can take advantage of their innate ones. In this new world it no longer makes sense to think of people as born with fixed reserves of potential: instead, potential is an expandable vessel, shaped by the various things we do throughout our lives. Learning isn’t a way of reaching one’s potential but rather a way of developing it.
Expertise is potentially available to anyone who will commit to a prolonged and disciplined process of deliberate practice.
When I wrote my blog piece titled “Data Visualization Lite” not long ago, some of you might have thought that I was stroking my own ego by expressing dissatisfaction with the accomplishments of newcomers to the field. That wasn’t the case. I am genuinely discouraged by the paucity of good infovis research, by the redundancy and errors of most recent books on data visualization practices, and by the mediocre design and functionality of data visualization tools. I want us to do better and I know that we can, but not by doing business as usual. The rapid rise in the popularity of data visualization, which began a little over a decade ago, has done more harm than good. Popularity often breeds mediocrity. We must stuff socks in the mouths of marketers and shift the message from the gospel of salvation through technologies to an emphasis on human skills.
I’ve dedicated my professional life to the development of these skills, both in myself and in others. I want the efforts of others to surpass my own. I want to be left in the dust, rather than constantly looking back and yelling “This way. Hurry up!” This will take deep learning that builds on the best work that’s been done so far. There are no shortcuts. Expertise is the result of hard work, and at times it is no more fun than practicing the violin for several hours every day, but it’s worth it.
July 1st, 2016
Expertise is developed through “deep work.” This term was coined by Cal Newport to describe the highly focused periods of concentration that are required, not just to develop expertise, but to do good work in almost any field of endeavor. He defines deep work as:
Professional activities performed in a state of distraction-free concentration to push cognitive capabilities to their limit. These efforts create new value, improve your skill, and are hard to replicate.
Achievement in almost all fields of endeavor, and especially in all forms of knowledge work, demands deep work. Most knowledge workers today spend their time drowning in the shallows. Newport defines “shallow work” as:
Noncognitively demanding, logistical-style tasks, often performed while distracted. These efforts tend to not create much new value in the world and are easy to replicate.
Newport, an assistant professor of computer science at Georgetown University and the author of the blog Study Hacks as well as the book So Good They Can’t Ignore You, has written a new book titled Deep Work: Rules for Focused Success in a Distracted World. Reading Deep Work this week was a perfect continuation and expansion of the thoughts that I expressed in my blog on June 13, “Data Visualization Lite,” for a lack of deep work is probably the reason why most recent work in the field of data visualization is splashing around in the shallows.
This book is based on the following hypothesis about deep work:
The ability to perform deep work is becoming increasingly rare at exactly the same time it is becoming increasingly valuable in our economy. As a consequence, the few who cultivate this skill, and then make it the core of their working life, will thrive.
I believe that Newport’s hypothesis is valid. Over the years I have written and spoken many times about the importance of slowing down and thinking deeply, over an extended period of time, as the path to understanding and also to a fulfilling professional life. I work hard to create space for regular deep work. It is for this reason that I have always avoided all forms of social media (Facebook, LinkedIn, Twitter, etc.), for they offer me little compared to the overwhelming distraction that they would create. During much of my professional life I struggled to carve out opportunities for deep work in organizations that were not designed to support it. Founding Perceptual Edge made it possible for me to design a workplace and schedule that supports the deep work that makes me happy and productive.
In this book, Newport explains what deep work involves, why it’s important, what makes it so difficult to experience among today’s knowledge workers, and how these difficulties can be overcome. Newport’s insight that deep work is needed is not new, but what he’s done with this book is. He has exposed the problem and prescribed its remedy in a way that perfectly fits our current, technologically “connected” world. If you struggle to reach your cognitive potential as your mind flits from shallow thought to shallow thought, frenetically busy but not productive, I recommend that you read Deep Work.
June 28th, 2016
My revision of Alexander Pope’s words, “To err is human, to forgive, divine,” is not meant to diminish the importance of forgiveness, but instead to promote the great value of errors as learning opportunities. We don’t like to admit our mistakes, but it’s important that we do. We all make errors in droves. Failing to admit and learn from private errors may harm no one but ourselves, but this failure has a greater cost when our errors affect others. Acknowledging public errors, such as errors in published work, is especially important.
I was prompted to write this by a recent email exchange. I heard from a reader named Phil who questioned a graph that appeared in an early printing of my book Information Dashboard Design (First Edition). This particular graph was part of a sales dashboard that I designed to illustrate best practices. It was a horizontal bar graph with two scales and two corresponding series of bars, one for sales revenues and one for the number of units sold. It was designed in a way that inadvertently encouraged a potentially misleading comparison of revenues and unit counts (see below).
I would not design a graph in this manner today, but when I originally wrote Information Dashboard Design in 2005, I had not yet thought this through. This particular graph was further complicated by the fact that the scale for units was expressed in 100s (e.g., a value of 50 on the scale represented 5,000), which was a bit awkward to interpret. I fixed the dual-scale and units problem in the book long ago (see below).
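The redesign principle at work here can be sketched in code. The following is a minimal illustration in Python with matplotlib, not a reproduction of the dashboard itself: instead of overlaying two bar series against two different scales in one graph, each measure gets its own panel, aligned on a shared category axis so rows can still be compared. The regions and values are invented for the example.

```python
# Two aligned panels instead of one dual-scale bar graph.
# All data below is hypothetical, for illustration only.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

regions = ["North", "South", "East", "West"]    # hypothetical categories
revenue = [830_000, 620_000, 540_000, 310_000]  # dollars, labeled directly
units = [4_100, 5_600, 2_300, 1_900]            # counts, not "in 100s"

# sharey=True keeps the two panels aligned on the same category axis,
# so each row can be read across both measures without a second scale.
fig, (ax_rev, ax_units) = plt.subplots(ncols=2, sharey=True, figsize=(8, 3))

ax_rev.barh(regions, revenue, color="steelblue")
ax_rev.set_xlabel("Sales revenue (USD)")
ax_units.barh(regions, units, color="gray")
ax_units.set_xlabel("Units sold")
ax_rev.invert_yaxis()  # list the first region at the top

fig.tight_layout()
fig.savefig("sales_panels.png")
```

Labeling the units scale in actual counts, rather than hundreds, also removes the awkward interpretation step mentioned above.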
I began my response to Phil’s email with the tongue-in-cheek sentence, “Thanks for reminding me of past mistakes.” I had forgotten about the earlier version of the sales dashboard and Phil’s reminder made me cringe. Nevertheless, I admitted my error to him and now I’m admitting it to you. I learned from this error long ago, which relinquishes most of this admission’s sting. Even had the error persisted to this day, however, I would have still acknowledged it, despite discomfort, because that’s my responsibility to readers, and to myself as well.
When, in the course of my work in data visualization, I point out errors in the work of others, I’m not trying to discourage them. Rather, I’m firstly hoping to counter the ill effects of those errors on the public and secondly to give those responsible for the errors an opportunity to learn and improve. This is certainly the case when I critique infovis research papers. I want infovis research to improve, which won’t happen if poor papers continue to be published without correction. This was also the case when I recently expressed my concern that most of the books written about data visualization practices in the last decade qualify as “Data Visualization Lite.” I want a new generation of data visualization authors and teachers to carry this work that I care about forward long after my involvement has ceased. I want them to stand on my shoulders, not dangle precariously from my belt.
Imagine how useful it would be for researchers to publish follow-ups to their papers a few years after publication. Researchers could correct errors and describe what they’ve learned since publication. They could warn readers to dismiss claims that have since been shown invalid. They could describe how they would redesign the study if they were doing it again. This could contribute tremendously to our collective knowledge. How often, however, do authors of research papers ever mention previous work, except briefly in passing? What if researchers were required to maintain an online document, linked to their published papers, to record all subsequent findings affecting the content of the original paper? As it is now, bad research papers never die. Most are soon forgotten, assuming they were ever noticed in the first place, but they’re often kept alive for many years through citations, even when they’ve been deemed unreliable.
A similar practice could be followed by authors of books. Authors sometimes do this to some degree when they write a new edition of a book. Two of my books are now in their second editions. Most of the changes in my new editions involve additional content and updated examples, but I’ve corrected a few errors as well. Perhaps I should have included margin notes in my second editions to point out content that was changed since the first to correct errors. This might be distracting for most readers, however, especially those who haven’t read the previous edition, but I could provide a separate document on my website of those corrections for anyone who cares. Perhaps I will in the future.
Errors are our friends if we develop a healthy relationship with them. This relationship begins with acceptance, continues through correction, and lives on in the form of better understanding. Those who encourage this healthy relationship by opening their work to critique and by critiquing the work of others are likewise our friends. If I’ve pointed out errors in your work, I’m not your enemy. If you persist in spreading errors to the world despite correction, however, you become an enemy to your readers.
Data visualization matters. It isn’t just a job or field of study, it’s a path to understanding, and understanding is our bridge to a better world.
June 24th, 2016
When we need advice in our personal lives, to whom do we turn? To someone we trust, who has our interests at heart and is wise. So why then do we often rely on advisers in our professional lives whose interests are in conflict with our own? If your work involves business intelligence, analytics, data visualization, or the like, from whom do you seek advice about products and services? If you’re like most professionals, you unwittingly seek advice from people and organizations with incentives to sell you something. You get advice from the vendors themselves, from technology analysts with close ties to those vendors, or from journalists who are secretly compensated by those vendors. That’s not rational, so why do we do it? Usually because it’s convenient, and sometimes because we don’t really care if the advice is good or not, for it is our employers, not us, who will suffer the consequences. If we actually care, however, we should do a better job of vetting our advisers.
It should be obvious that we cannot expect objectivity from the vendors themselves. Even when a vendor’s employees post advice on independent websites and claim that their opinions are their own, they remain loyal to their employers. In fact, it’s a great marketing ploy for vendors to have their employees post advice from independent sites rather than from their own. It suggests a level of objectivity that serves the vendor’s interests and multiplies their presence on the web. We must also question with similar suspicion the objectivity of consultants and teachers who have built their work around a single product.
What about technology analyst groups, such as Gartner, Forrester, and TDWI, to name a few of the big guys? These organizations fail in many ways to maintain a healthy distance from the very technology vendors that are the subject of their advice. In fact, they are downright cozy with the vendors.
Trustworthy technology advisers go to great pains to maintain objectivity. They are few and far between. To be objective, I believe that advisers should do the following:
- Disclose all of their relationships with vendors. This is especially true of relationships that involve the exchange of money. If they accept money from vendors, they should willingly disclose the figures upon request.
- Do not allow vendors to advertise on their websites, in their publications, or at their events.
- Only accept payments from vendors for professional services specifically rendered to improve the vendor’s products or services. Payments for marketing advice do not qualify.
- Do not publish content prepared by vendors.
Try to find technology analysts and journalists who follow these guidelines. Even with diligent effort, you won’t find many, because there aren’t many to find.
Try an experiment. If your company subscribes to one of the big technology analyst services (Gartner, etc.), next time they produce a report that scores BI, analytics, or data visualization products, ask them for a copy of the data on which they based those scores, along with the algorithms that processed the data. This is likely done in an Excel spreadsheet, so just ask them for a copy of the file. After making the request, watch them squirm and expect creative excuses. Most likely they’ll say something along these lines: “Our scoring system is based on a sophisticated and proprietary algorithm that we cannot make public because it gives us an edge over the competition.” Bullshit. There is definitely a secret in that spreadsheet that they don’t want to share, but it is not a sophisticated algorithm.
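To make concrete why the “sophisticated and proprietary algorithm” excuse deserves skepticism, here is a sketch of the kind of arithmetic such a scoring spreadsheet plausibly contains: a weighted average of subjective criterion ratings. The criteria, weights, vendors, and ratings below are all invented for illustration; this does not reproduce any analyst firm’s actual model.

```python
# A hypothetical vendor-scoring model: subjective 1-10 ratings per
# criterion, combined by a weighted average. Every name and number
# here is invented; the point is only that the math is trivial.
criterion_weights = {
    "completeness_of_vision": 0.5,
    "ability_to_execute": 0.5,
}

vendor_ratings = {
    "Vendor A": {"completeness_of_vision": 7, "ability_to_execute": 9},
    "Vendor B": {"completeness_of_vision": 8, "ability_to_execute": 6},
}

def weighted_score(ratings, weights):
    """Combine one vendor's criterion ratings into a single score."""
    return sum(ratings[criterion] * weight
               for criterion, weight in weights.items())

scores = {vendor: weighted_score(ratings, criterion_weights)
          for vendor, ratings in vendor_ratings.items()}
print(scores)  # → {'Vendor A': 8.0, 'Vendor B': 7.0}
```

Anyone with a spreadsheet could reproduce this in minutes, which is why the secret being guarded is unlikely to be the algorithm. What matters is where the subjective ratings and weights come from, and who pays the people who assign them.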
After they refuse to show their work, move on to the following request: “Please give me a list of the vendors that you evaluated along with the amount of money that you have received from each for the last few years.” They won’t give it to you, of course, and they’ll explain that they cannot for reasons of confidentiality. Think about that for a moment. It is no doubt true that they promised to never reveal the money that changed hands between them and the vendors, but shouldn’t this clear conflict of interest be subject to scrutiny? Technology analysts and the vendors that they support are not fans of transparency.
There are a few technology advisers who do good work and do it with integrity. If you want objective and expert advice from someone who is looking out for your interests, be sure to vet your advisers with diligence and care. Question their motives. If it looks like they’re acting as an extension of vendor marketing efforts, they probably are. If, on the other hand, you’re just looking for easy answers, abandon all skepticism and do a quick Google search and then read the advice that receives top ranking. Or, better yet, schedule a call with the analyst group for whose advice you pay dearly in the form of an annual subscription.
(Postscript: Yes, I consider myself one of the few data visualization advisers whom you can trust.)
June 13th, 2016
In the world of data visualization, we are progressing at a snail’s pace. This is not the encouraging message that vendors and many “experts” are promoting, but it’s true. In the year 2004, I wrote the first edition of Show Me the Numbers in response to a clear and pressing need. At the time no book existed that pulled together the principles and best practices of quantitative data presentation and made them accessible to the masses of mostly self-trained people who work with numbers. I was originally inspired by the work of Edward Tufte, but realized that his work, exceptional though it was, awed us with a vision of what could be done without actually showing us how to do it. After studying all of the data visualization resources that I could find at the time, I pulled together the best of each, combined it with my own experience, gave it a simple and logical structure, and expressed it comprehensibly in accessible and practical terms. At that time, data visualization was not the hot topic that it is today. Since then, as the topic has ignited the imagination of people in the workplace and become a dominant force on the web, several books have been written about quantitative data presentation. I find it disappointing, however, that almost nothing new has been offered. With few exceptions, most of the books that have been written about data visualization, excluding books about particular tools or specific applications (e.g., dashboard design), qualify as data visualization lite.
Those books written since 2004 that aren’t filled with errors and poor guidance, with few exceptions, merely repeat what has been written previously. Saying the same old thing in a new voice is not helpful unless that new voice reaches an audience that hasn’t already been addressed or expresses the content in a way that is more informative. Most of the new voices are addressing data visualization superficially, appealing to an audience that desires skill without effort. As such, they dangle a false promise before the eager eyes of lazy readers. Data visualization lite is not a viable solution to the world’s need for clear and accurate information. Instead, it is a compromise tailored to appeal to short attention spans and a desire for immediate expertise, which isn’t expertise at all.
In a world that longs for self-service business intelligence, naively placing data sensemaking and communication in the same category as pumping gas, we need fresh voices to proclaim the unpopular truth that these skills can only be learned through thoughtful training and prolonged practice. It is indeed true that many people in our organizations can learn to analyze and present quantitative data effectively, but not without great effort. We don’t need voices to reflect the spirit of our time; we need voices to challenge that spirit—voices of transformation. Demand depth. Demand lessons born of true expertise. Demand evidence.
Where are these fresh and courageous voices? Who will light the way forward? There are only a few who are expressing new content, addressing new audiences, or expressing old content in new and useful ways. Until we demand more thoughtful and transformative work, the future of data visualization will be dim.