Thanks for taking the time to read my thoughts about Visual Business
Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions
that are either too urgent to wait for a full-blown article or too
limited in length, scope, or development to require the larger venue.
For a selection of articles, white papers, and books, please visit
October 21st, 2014
In the past few years, several fine books have been written by neuroscientists. In this blog I’ve reviewed those that are most useful and placed Daniel Kahneman’s Thinking, Fast and Slow at the top of the heap. I’ve now found its worthy companion: The Organized Mind: Thinking Straight in the Age of Information Overload.
This new book by Daniel J. Levitin explains how our brains have evolved to process information and he applies this knowledge to several of the most important realms of life: our homes, our social connections, our time, our businesses, our decisions, and the education of our children. Knowing how our minds manage attention and memory, especially their limitations and the ways that we can offload and organize information to work around these limitations, is essential for anyone who works with data.
This excerpt from the introduction will provide a sense of Levitin’s intention:
We humans have a long history of pursuing neural enhancement—ways to improve the brains that evolution gave us. We train them to become more dependable and efficient allies in helping us to achieve our goals…Through the sheer force of human ingenuity, we have devised systems to free our brains of clutter, to help us keep track of details that we can’t trust ourselves to remember. All of these innovations are designed either to improve the brain we have, or to off-load some of its functions to external sources…It’s helpful to understand that our modes of thinking and decision-making evolved over the tens of thousands of years that humans lived as hunter-gatherers. Our genes haven’t fully caught up with the demands of modern civilization, but fortunately human knowledge has—we now better understand how to overcome evolutionary limitations. This is the story of how humans have coped with information and organization from the beginning of civilization. It’s also the story of how the most successful members of society…have learned to maximize their creativity, and efficiency, by organizing their lives so that they spend less time on the mundane, and more time on the inspiring, comforting, and rewarding things in life.
Levitin describes the nature of our so-called information age, including the many ways that work done by information specialists in the past has been transferred to us (for example, making our own travel arrangements rather than relying on the services of a travel agent), resulting in overwhelming cognitive demands. He shows how the coping strategy of multi-tasking is in fact an inefficient form of serial tasking—attentional switching—that provides an illusory sense of productivity. Many of life’s challenges require focus—extended periods of uninterrupted attention. We also need the replenishment of neural energy that the mind-wandering mode provides, where associations and insights can form while the mind soars freely.
How we sort through incoming data in rapid triage to separate urgent matters from others, how we categorize and store it for later retrieval, how we protect ourselves from the din of distracting noise, and how we assess facts when making decisions, are all skills that we can learn. Levitin’s advice is practical, lucid, and firmly rooted in an understanding of the brain.
When organizations are chasing the latest so-called Big Data technologies, it’s important to recognize that more and faster isn’t necessarily better, and in fact is often worse, because of the ways that our brains are designed. If you’re involved in business intelligence, analytics, data visualization, data science, data storytelling, decision support, or whatever you choose to call the work of data sensemaking and communication, books like The Organized Mind are more useful to your success than a hundred books about specific software tools. Do yourself a favor and read this book.
October 3rd, 2014
In my dashboard design course, which I taught most recently last week in Portland, Oregon, it is inevitable that participants ask for help in gathering requirements for a useful dashboard. An entire weeklong course could easily be dedicated to the process of interacting with people to determine what they should be monitoring on a dashboard and how they should monitor it. Because my dashboard design course focuses on the visual design of dashboards, I spend little time on the important topic of requirements gathering. I know someone, however, whose work focuses on this important topic: Stacey Barr, the Performance Measurement Specialist.
Stacey and I became fast friends several years ago during my first visit to Australia. Stacey is based in Brisbane. I turn to Stacey when I have questions about performance measurement and she turns to me when she has questions about data visualization. For years I’ve been encouraging Stacey to put her ideas in a book and I’m thrilled to announce that, after a long labor of love, she has done so. Within a couple of weeks her book Practical Performance Measurement will be available through booksellers such as Amazon.
I haven’t received my copy yet, so I can’t comment on the content, but I’m confident that it will be worthwhile. One way to determine this for yourself is to attend a free webinar that Stacey will be conducting to introduce some of the book’s content. If you struggle with performance measurement, you should take advantage of this opportunity.
October 1st, 2014
During the last few months, I’ve been hit with a double barrel of distraction. Life sometimes gets in the way, but the path forward is opening up and I’m anxious to once again give my work the attention it deserves. Even though I haven’t been very active in this blog recently, things have been happening. My new book Signal is only a few days away from being ready for editing, but just as exciting are the two new advanced data visualization courses that I’ll be introducing in January and the addition of Nick Desbarats (pronounced like the name Deborah) as an educator and design consultant to the Perceptual Edge team.
Nick approached me a few months ago with a passion for data visualization, combined with a keen intellect, a fine pedigree of skills and experiences in decision support, and an insatiable curiosity. He’s also an accomplished business leader, having founded or co-founded three companies.
Even though I had long ago decided that I would never hire someone to teach my courses, Nick eventually opened my mind to the possibility. Nick’s addition to the team will solve a particular problem that many organizations have faced during the last few years since I began teaching public workshops: he will accept invitations to teach my courses privately. In recent years I’ve turned down many requests for private courses, preferring to use my limited time for teaching in public workshops where I can reach the broadest audience possible in the most efficient way. Nick is now available to teach Show Me the Numbers privately, and will soon offer my Information Dashboard Design and Now You See It courses as well. If your organization has at least 30 people that you’d like to have trained, email us at info@PerceptualEdge.com or call us at (510) 558-7400 to discuss the possibility.
As some of you have already noticed, we’ve opened registration for two new advanced courses on our Workshops page. On January 13 and 14, 2015 I’ll be teaching the two-day course Signal, which covers the content of my new book, in Berkeley, California. I’ll also be teaching it in Utrecht in the Netherlands on May 18 and 19, 2015 and in Sydney, Australia on November 10 and 12, 2015. Here’s an introduction to the content of Signal.
Many people assume that, if they know how to use data analysis software, this means that they are skilled data analysts. Knowledge of data analysis tools and possession of data analysis skills, however, are not the same. Even expert users of analytical tools frequently have little or no knowledge of fundamental yet powerful data analysis techniques.
Do you have the nagging sense that signals in your data—the things that matter most—might be slipping by unnoticed? Do you ever wonder if you’re wasting time tracking the wrong metrics or tracking the right metrics in the wrong ways? The data analysis that takes place in most organizations produces only a small fraction of its potential for useful insights. This is because most of the people who do this work have never been trained in data exploration and analysis beyond the basics, if at all. When presented with a new data set, do you know how to get the lay of the land—the context that’s necessary for analytical insights? Do you know how to separate signals from the noise?
At any given point in time, only a small portion of any organization’s data is useful for decision making. The rest is noise. Increases in data volume, velocity, and variety are actually more of a problem than a benefit unless you know how to find and decipher the signals buried in that growing haystack.
In this advanced data exploration and analysis course, I’ll take participants beyond the basic skills that are taught in my books Show Me the Numbers and Now You See It to the next level of statistical and data visualization skills that are required for signal detection. Only the signals matter.
The other new advanced course is Advanced Dashboard Design. I’ll be teaching this three-day course for the first time on January 27-29, 2015 in Berkeley, California, and later in London on March 4-6, 2015 (now open for registration) and in Sydney, Australia on November 10-12, 2015. Here’s a description:
Dashboards have now been a popular form of information display for over a decade, but relatively few of them live up to their potential. In my book Information Dashboard Design: Displaying Data for At-a-Glance Monitoring, Second Edition (2013), I introduce best practices for dashboard design that can be learned with relative ease. Competence, however, takes more than reading a book; it takes experience that benefits from expert oversight and advice. This three-day course is specifically designed for dashboard designers who have already learned the basics and now want to extend and deepen their skills.
A series of short lectures and discussions cover advanced dashboard design topics, but most of the course involves hands-on dashboard design. In small groups of four, participants collaborate to fully design (but not develop) real dashboards of their choosing. Each participant brings a single set of dashboard requirements to the course and, with the assistance of fellow participants and me, proceeds step-by-step through the design process from initial sketching to polished wireframes, which they present to the class on the last day for a final critique. Participants leave the course with a well-designed, real-world dashboard in hand and the skills to produce others, again and again.
This course is limited to 20 participants and is only for those who are already skilled dashboard designers.
If you have any questions about private training by Nick Desbarats or about either of the two advanced courses, you’re welcome to post them here in this blog.
August 18th, 2014
The first time that all but a few of us heard the term “Big Data,” we heard it in the context of a marketing campaign by information technology vendors to promote their products and services. It is this marketing campaign that has made the term popular, leading eventually to the household name that it is today. Despite its popularity, it remains a term seeking a definitive meaning. There are as many definitions of Big Data as there are individuals and organizations that would like to benefit from the belief that it exists. My objective in this brief blog article is to ask, “Does Big Data signify anything that is actually happening, and if so, what is it?”
Long before the term came into common usage around the year 2010, it began to pop up here and there in the late 1990s. It first appeared in the context of data visualization in 1997 at the IEEE 8th Conference on Visualization in a paper by Michael Cox and David Ellsworth titled “Application-controlled demand paging for out-of-core visualization.” The article begins as follows:
Visualization provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data. When data sets do not fit in main memory (in core), or when they do not fit even on local disk, the most common solution is to acquire more resources.
Two years later, at the 1999 IEEE Conference on Visualization, a panel was convened titled “Automation or interaction: what’s best for big data?”
In February of 2001, Doug Laney, at the time an analyst with the Meta Group, now with Gartner, published a research note titled “3D Data Management: Controlling Data Volume, Velocity, and Variety.” The term Big Data did not appear in the note, but a decade later, the “3Vs” of volume, velocity, and variety became the most common attributes that are used to define Big Data.
The first time that I ran across the term personally was in a 2005 email from the software company Insightful, the maker of S+, a commercial derivative of the statistical programming language S (the language from which R was also derived), in the title of a course: “Working with Big Data.”
By 2008 the term had become common enough in scientific circles to warrant a special issue of Nature magazine. Still, it didn’t begin to be used more broadly until February 2010, when Kenneth Cukier wrote a special report for The Economist titled “Data, Data Everywhere,” in which he said:
…the world contains an unimaginably vast amount of digital information which is getting ever vaster ever more rapidly… The effect is being felt everywhere, from business to science, from governments to the arts. Scientists and computer engineers have coined a new term for the phenomenon: “big data.”
It was around this time that the term was snatched from the world of academia to become the most successful information technology marketing campaign of the current decade. (I found most of the historical references to the term Big Data in the Forbes June 6, 2012 blog post by Gil Press titled “A Very Short History of Big Data.”)
Because Big Data has no commonly accepted definition, discussions about it are rarely meaningful or useful. Not once have I encountered a definition of Big Data that actually identifies anything that is new about data or its use. Doug Laney’s 3Vs, which describe exponential increases in data volume, velocity, and variety, have been happening since the advent of the computer many years ago. You might think that technological milestones such as the advent of the personal computer, Internet, or social networking have created exponential increases in data, but they have merely sustained exponential increases that were already happening. Had it not been for these technological advances, increases in data would have ceased to be exponential. Recently, definitions have emphasized the notion that Big Data is data that cannot be processed by conventional technologies. What constitutes conventional vs. unconventional technologies? My most recent encounter with this was the claim that Big Data is that which cannot be processed by a desktop computer. Based on this rather silly definition, Big Data has always existed, because personal computers have never been capable of processing many of the datasets that organizations collect.
So, if Big Data hasn’t been defined in an agreed-upon manner and if none of the existing definitions identify anything about data or its use that is actually new, does the term really describe anything? I’ve thought about this a great deal and I’ve concluded that it describes one thing only that has actually occurred in recent years:
Big Data is a rapid increase in public awareness that data is a valuable resource for discovering useful and sometimes potentially harmful knowledge.
Even if Big Data is this and nothing more, you might think that I’d be grateful for it. I make my living helping people understand and communicate information derived from data, so Big Data has produced a greater appreciation for my work. Here’s the rub: Big Data, as a term with no clear definition, which serves as a marketing campaign for technology vendors, encourages people to put their faith in technologies without first developing the skills that are needed to use those technologies. As a result, organizations waste their money and time chasing the latest so-called Big Data technologies—some useful, some not—to no effect because technologies can only augment the analytical abilities of humans; they cannot make up for our lack of skills or entirely replace our skills. Data is indeed a valuable resource, but only if we develop the skills to make sense of it and find within the vast and exponentially growing noise those relatively few signals that actually matter. Big Data doesn’t do this, people do—people who have taken the time to learn.
July 8th, 2014
Even though we all claim to value education, teaching and learning are rarely done well. To achieve good outcomes, teachers and students must understand how the brain learns. Unfortunately, few teachers have more than a passing acquaintance with the science of learning. Many of the strongly held and frequently espoused notions about learning practices (e.g., good study habits), which seem intuitive, are dead wrong. Scientific investigation into the learning brain has revealed a great deal, especially in recent years, but the findings seldom reach the teachers and learners who would benefit from them. Peter Brown, Henry L. Roediger III, and Mark A. McDaniel have responded to this problem in the form of a wonderful new book titled Make It Stick: The Science of Successful Learning (2014).
Don’t confuse this with another fine book titled Made to Stick (2007) by brothers Chip and Dan Heath, which teaches how to get messages across in clear and compelling ways. Make It Stick presents in accessible terms the latest research findings regarding learning, both for people who want to optimize their own learning efforts and for teachers who want to create successful learning experiences for their students.
By learning, the authors mean “acquiring knowledge and skills and having them readily available from memory so you can make sense of future problems and opportunities.” They’re not talking about simple recall. Learning involves memory, but extends beyond mere recall into the realm of application. Real learning is “effortful.” For example, “When you’re asked to struggle with solving a problem before being shown how to solve it, the subsequent solution is better learned and more durably remembered.” Effort alone isn’t enough, however. It has to be the right effort.
To apply knowledge and skills to new problems and opportunities when they arise, we must possess more than procedural familiarity; we must have a conceptual understanding that is generalizable. “People who learn to extract the key ideas from new material and organize them into a mental model and connect that model to prior knowledge show an advantage in learning complex mastery.” We can all become better learners by developing better learning practices. One such practice is frequent testing. Whether you’re studying on your own or in a structured learning setting, frequently testing your understanding and ability to apply what you’re learning strengthens it and provides the feedback that you need to focus your efforts where they’re most needed.
Many popular beliefs about learning, such as the benefits of cramming (a.k.a., massed practice) and rereading material over and over, are flawed. The ability to perform well on a multiple-choice test soon after cramming or rereading material is short-lived. Spaced practice, interleaved with other material, results in better learning than non-stop focus on a single topic or skill. Some of the best learning practices are counter-intuitive and don’t necessarily feel like progress during the learning process itself, even though they dramatically outperform other practices that feel more productive. Some beliefs about learning that have garnered attention in recent years are downright wrong. One that I’ve encountered frequently in my own work is the notion that people learn best when they engage in the learning style that they prefer.
The popular notion that you learn better when you receive instruction in a form consistent with your preferred learning style, for example as an auditory or visual learner, is not supported by the empirical research. People do have multiple forms of intelligence to bring to bear on learning, and you learn better when you “go wide,” drawing on all of your aptitudes and resourcefulness, than when you limit instruction or experience to the style you find most amenable.
Our brains are designed to think in several modes (e.g., verbally, numerically, and visually), which we should shift between fluidly, as needed, depending on the nature of the material and the perspective from which we wish to consider it.
Another popular but misguided notion is called student-directed learning. “This theory holds that students know best what they need to study to master a subject, and what pace and methods work best for them.” While it’s true that students should take more responsibility for their own learning, “most students will learn academics better under an instructor who knows where improvement is needed and structures the practice required to achieve it.”
Fundamentally, the purpose for which we pursue the acquisition of information and skills has a significant effect on learning. There is a huge difference between focusing on performance versus focusing on learning.
In the first case, you’re working to validate your ability. In the second, you’re working to acquire new knowledge and skills. People with performance goals unconsciously limit their potential. If your focus is on validating or showing off your ability, you pick challenges you are confident you can meet…But if your goal is to increase your ability, you pick ever-increasing challenges, and you interpret setbacks as useful information that helps you sharpen your focus, get more creative, and work harder.
I could go on, but I won’t, because I merely want to whet your appetite for more. This is an excellent book and one that is desperately needed. As the authors say, “No matter what you may set your sights on doing or becoming, if you want to be a contender, it’s mastering the ability to learn that will get you in the game and keep you there.”
Although I was already familiar with much of the material in this book, because of extensive reading about learning theory, 40 years of reflective teaching experience, and a lifelong love of learning, a great deal was new to me. Enough, in fact, that I will soon be redesigning my table and graph design course, Show Me the Numbers, to last two days rather than one so I can add frequent tests, additional discussions, and many more group exercises to guarantee that my students leave with a stronger foundation to build on. I’ve been teaching the concepts well, but not fully providing the learning experience that will make those concepts stick.