Thanks for taking the time to read my thoughts about Visual Business
Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions
that are either too urgent to wait for a full-blown article or too
limited in length, scope, or development to require the larger venue.
For a selection of articles, white papers, and books, please visit
October 1st, 2014
During the last few months, I’ve been hit with a double barrel of distraction. Life sometimes gets in the way, but the path forward is opening up and I’m anxious to once again give my work the attention it deserves. Even though I haven’t been very active in this blog recently, things have been happening. My new book Signal is only a few days away from being ready for editing, but just as exciting are the two new advanced data visualization courses that I’ll be introducing in January and the addition of Nick Desbarats (pronounced like the name Deborah) as an educator and design consultant to the Perceptual Edge team.
Nick approached me a few months ago with a passion for data visualization, combined with a keen intellect, a fine pedigree of skills and experiences in decision support, and an insatiable curiosity. He’s also an accomplished business leader, having founded or co-founded three companies.
Even though I had long ago decided that I would never hire someone to teach my courses, Nick eventually opened my mind to the possibility. Nick’s addition to the team will solve a particular problem that many organizations have faced during the last few years since I began teaching public workshops: he will accept invitations to teach my courses privately. In recent years I’ve turned down many requests for private courses, preferring to use my limited time for teaching in public workshops where I can reach the broadest audience possible in the most efficient way. Nick is now available to teach Show Me the Numbers privately, and will soon offer my Information Dashboard Design and Now You See It courses as well. If your organization has at least 30 people that you’d like to have trained, email us at info@PerceptualEdge.com or call us at (510) 558-7400 to discuss the possibility.
As some of you have already noticed, we’ve opened registration for two new advanced courses on our Workshops page. On January 13 and 14, 2015 I’ll be teaching the two-day course Signal, which covers the content of my new book, in Berkeley, California. I’ll also be teaching it in Utrecht in the Netherlands on May 18 and 19, 2015 and in Sydney, Australia on November 10 and 12, 2015. Here’s an introduction to the content of Signal.
Many people assume that knowing how to use data analysis software makes them skilled data analysts. Knowledge of data analysis tools and possession of data analysis skills, however, are not the same. Even expert users of analytical tools frequently have little or no knowledge of fundamental yet powerful data analysis techniques.
Do you have the nagging sense that signals in your data—the things that matter most—might be slipping by unnoticed? Do you ever wonder if you’re wasting time tracking the wrong metrics or tracking the right metrics in the wrong ways? The data analysis that takes place in most organizations produces only a small fraction of its potential for useful insights. This is because most of the people who do this work have never been trained in data exploration and analysis beyond the basics, if at all. When presented with a new data set, do you know how to get the lay of the land—the context that’s necessary for analytical insights? Do you know how to separate signals from the noise?
At any given point in time, only a small portion of any organization’s data is useful for decision making. The rest is noise. Increases in data volume, velocity, and variety are actually more a problem than a benefit unless you know how to find and decipher signals buried in that growing haystack.
In this advanced data exploration and analysis course, I’ll take participants beyond the basic skills that are taught in my books Show Me the Numbers and Now You See It to the next level of statistical and data visualization skills that are required for signal detection. Only the signals matter.
The other new advanced course is Advanced Dashboard Design. I’ll be teaching this three-day course for the first time on January 27-29, 2015 in Berkeley, California, and later in London on March 4-6, 2015 (now open for registration) and in Sydney, Australia on November 10-12, 2015. This course is limited to 20 participants. Here’s a description:
Dashboards have now been a popular form of information display for over a decade, but relatively few of them live up to their potential. In my book Information Dashboard Design: Displaying Data for At-a-Glance Monitoring, Second Edition (2013), I introduce best practices for dashboard design that can be learned with relative ease. Competence, however, takes more than reading a book; it takes experience that benefits from expert oversight and advice. This three-day course is specifically designed for dashboard designers who have already learned the basics and now want to extend and deepen their skills.
A series of short lectures and discussions cover advanced dashboard design topics, but most of the course involves hands-on dashboard design. Working in small groups of four, participants collaborate to fully design (but not develop) real dashboards of their choosing. Each participant brings a single set of dashboard requirements to the course and, with the assistance of fellow participants and me, proceeds step-by-step through the design process from initial sketching to polished wireframes, which they present to the class on the last day for a final critique. Participants leave the course with a well-designed, real-world dashboard in hand and the skills to produce others, again and again.
This course is only for those who are already skilled dashboard designers.
If you have any questions about private training by Nick Desbarats or about either of the two advanced courses, you’re welcome to post them here in this blog.
August 18th, 2014
The first time that all but a few of us heard the term “Big Data,” we heard it in the context of a marketing campaign by information technology vendors to promote their products and services. It is this marketing campaign that has made the term popular, leading eventually to the household name that it is today. Despite its popularity, it remains a term seeking a definitive meaning. There are as many definitions of Big Data as there are individuals and organizations that would like to benefit from the belief that it exists. My objective in this brief blog article is to ask, “Does Big Data signify anything that is actually happening, and if so, what is it?”
Long before the term came into common usage around the year 2010, it began to pop up here and there in the late 1990s. It first appeared in the context of data visualization in 1997 at the IEEE 8th Conference on Visualization in a paper by Michael Cox and David Ellsworth titled “Application-controlled demand paging for out-of-core visualization.” The article begins as follows:
Visualization provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data. When data sets do not fit in main memory (in core), or when they do not fit even on local disk, the most common solution is to acquire more resources.
Two years later, at the 1999 IEEE Conference on Visualization, a panel was convened titled “Automation or interaction: what’s best for big data?”
In February of 2001, Doug Laney, at the time an analyst with the Meta Group, now with Gartner, published a research note titled “3D Data Management: Controlling Data Volume, Velocity, and Variety.” The term Big Data did not appear in the note, but a decade later, the “3Vs” of volume, velocity, and variety became the most common attributes that are used to define Big Data.
The first time that I ran across the term personally was in a 2005 email from the software company Insightful, the maker of S+, a commercial derivative of the statistical programming language S, in the title of a course “Working with Big Data.”
By 2008 the term had become common enough in scientific circles to warrant a special issue of Nature magazine. Even then, it didn’t begin to be used more broadly until February, 2010, when Kenneth Cukier wrote a special report for The Economist titled “Data, Data Everywhere” in which he said:
…the world contains an unimaginably vast amount of digital information which is getting ever vaster ever more rapidly… The effect is being felt everywhere, from business to science, from governments to the arts. Scientists and computer engineers have coined a new term for the phenomenon: “big data.”
It was around this time that the term was snatched from the world of academia to become the most successful information technology marketing campaign of the current decade. (I found most of the historical references to the term Big Data in the Forbes June 6, 2012 blog post by Gil Press titled “A Very Short History of Big Data.”)
Because Big Data has no commonly accepted definition, discussions about it are rarely meaningful or useful. Not once have I encountered a definition of Big Data that actually identifies anything that is new about data or its use. Doug Laney’s 3Vs, which describe exponential increases in data volume, velocity, and variety, have been happening since the advent of the computer many years ago. You might think that technological milestones such as the advent of the personal computer, Internet, or social networking have created exponential increases in data, but they have merely sustained exponential increases that were already happening. Had it not been for these technological advances, increases in data would have ceased to be exponential. Recently, definitions have emphasized the notion that Big Data is data that cannot be processed by conventional technologies. What constitutes conventional vs. unconventional technologies? My most recent encounter with this was the claim that Big Data is that which cannot be processed by a desktop computer. Based on this rather silly definition, Big Data has always existed, because personal computers have never been capable of processing many of the datasets that organizations collect.
So, if Big Data hasn’t been defined in an agreed-upon manner and if none of the existing definitions identify anything about data or its use that is actually new, does the term really describe anything? I’ve thought about this a great deal and I’ve concluded that it describes one thing only that has actually occurred in recent years:
Big Data is a rapid increase in public awareness that data is a valuable resource for discovering useful and sometimes potentially harmful knowledge.
Even if Big Data is this and nothing more, you might think that I’d be grateful for it. I make my living helping people understand and communicate information derived from data, so Big Data has produced a greater appreciation for my work. Here’s the rub: Big Data, as a term with no clear definition, which serves as a marketing campaign for technology vendors, encourages people to put their faith in technologies without first developing the skills that are needed to use those technologies. As a result, organizations waste their money and time chasing the latest so-called Big Data technologies—some useful, some not—to no effect because technologies can only augment the analytical abilities of humans; they cannot make up for our lack of skills or entirely replace our skills. Data is indeed a valuable resource, but only if we develop the skills to make sense of it and find within the vast and exponentially growing noise those relatively few signals that actually matter. Big Data doesn’t do this, people do—people who have taken the time to learn.
July 8th, 2014
Even though we all claim to value education, teaching and learning are rarely done well. To achieve good outcomes, teachers and students must understand how the brain learns. Unfortunately, few teachers have more than a passing acquaintance with the science of learning. Many of the strongly held and frequently espoused notions about learning practices (e.g., good study habits), which seem intuitive, are dead wrong. Scientific investigation into the learning brain has revealed a great deal, especially in recent years, but the findings seldom reach the teachers and learners who would benefit from them. Peter Brown, Henry L. Roediger III, and Mark A. McDaniel have responded to this problem in the form of a wonderful new book titled Make It Stick: The Science of Successful Learning (2014).
Don’t confuse this with another fine book titled Made to Stick (2007) by brothers Chip and Dan Heath, which teaches how to get messages across in clear and compelling ways. Make It Stick presents in accessible terms the latest research findings regarding learning, both for people who want to optimize their own learning efforts and for teachers who want to create successful learning experiences for their students.
By learning, the authors mean “acquiring knowledge and skills and having them readily available from memory so you can make sense of future problems and opportunities.” They’re not talking about simple recall. Learning involves memory, but extends beyond mere recall into the realm of application. Real learning is “effortful.” For example, “When you’re asked to struggle with solving a problem before being shown how to solve it, the subsequent solution is better learned and more durably remembered.” Effort alone isn’t enough, however. It has to be the right effort.
To apply knowledge and skills to new problems and opportunities when they arise, we must possess more than procedural familiarity; we must have a conceptual understanding that is generalizable. “People who learn to extract the key ideas from new material and organize them into a mental model and connect that model to prior knowledge show an advantage in learning complex mastery.” We can all become better learners by developing better learning practices. One such practice is frequent testing. Whether you’re studying on your own or in a structured learning setting, frequently testing your understanding and ability to apply what you’re learning strengthens it and provides the feedback that you need to focus your efforts where they’re most needed.
Many popular beliefs about learning, such as the benefits of cramming (a.k.a., massed practice) and rereading material over and over, are flawed. The ability to perform well on a multiple-choice test soon after cramming or rereading material is short-lived. Spaced practice, interleaved with other material, results in better learning than non-stop focus on a single topic or skill. Some of the best learning practices are counter-intuitive and don’t necessarily feel like progress during the learning process itself, even though they dramatically outperform other practices that feel more productive. Some beliefs about learning that have garnered attention in recent years are downright wrong. One that I’ve encountered frequently in my own work is the notion that people learn best when they engage in the learning style that they prefer.
The popular notion that you learn better when you receive instruction in a form consistent with your preferred learning style, for example as an auditory or visual learner, is not supported by the empirical research. People do have multiple forms of intelligence to bring to bear on learning, and you learn better when you “go wide,” drawing on all of your aptitudes and resourcefulness, than when you limit instruction or experience to the style you find most amenable.
Our brains are designed to think in several modes (e.g., verbally, numerically, and visually), which we should shift between fluidly, as needed, depending on the nature of the material and the perspective from which we wish to consider it.
Another popular but misguided notion is called student-directed learning. “This theory holds that students know best what they need to study to master a subject, and what pace and methods work best for them.” While it’s true that students should take more responsibility for their own learning, “most students will learn academics better under an instructor who knows where improvement is needed and structures the practice required to achieve it.”
Fundamentally, the purpose for which we pursue the acquisition of information and skills has a significant effect on learning. There is a huge difference between focusing on performance versus focusing on learning.
In the first case, you’re working to validate your ability. In the second, you’re working to acquire new knowledge and skills. People with performance goals unconsciously limit their potential. If your focus is on validating or showing off your ability, you pick challenges you are confident you can meet…But if your goal is to increase your ability, you pick ever-increasing challenges, and you interpret setbacks as useful information that helps you sharpen your focus, get more creative, and work harder.
I could go on, but I won’t, because I merely want to whet your appetite for more. This is an excellent book and one that is desperately needed. As the authors say, “No matter what you may set your sights on doing or becoming, if you want to be a contender, it’s mastering the ability to learn that will get you in the game and keep you there.”
Although I was already familiar with much of the material in this book, because of extensive reading about learning theory, 40 years of reflective teaching experience, and a lifelong love of learning, a great deal was new to me. Enough, in fact, that I will soon be redesigning my table and graph design course, Show Me the Numbers, to last two days rather than one so I can add frequent tests, additional discussions, and many more group exercises to guarantee that my students leave with a stronger foundation to build on. I’ve been teaching the concepts well, but not fully providing the learning experience that will make those concepts stick.
May 12th, 2014
I am writing these words in Amsterdam. Yesterday, when I arrived here, I visited the Stedelijk Museum of contemporary art and design. The featured exhibition was the work of the Dutch industrial designer Marcel Wanders. This exhibit was timely, for I’m currently reading a book titled Design This Day by Walter Dorwin Teague, one of the founders of industrial design. The juxtaposition between Wanders’ current work and Teague’s formative concept of design struck me as extreme. Wanders is the antithesis of Teague. The exhibition of Wanders’ work featured this huge photograph above the entrance:
Wanders’ work exhibits conscious, unapologetic self-expression—”Look at me!” One of the quotes writ large on the museum’s wall expressed Wanders’ belief that a designer’s work should exhibit his personal signature. I disagree, as does Teague.
When speaking of the rightness of a design, Teague declares that all aspects “should derive their sanction from something more necessary than a designer’s fancy.” Design strives to solve a problem, to serve human needs, not to express the personality of the designer.
Wanders’ notion of design is quite different.
It is our responsibility to be magicians, to be jesters, to be alchemists, to create hope where there is only illusion, to create reality where there are only dreams.
He shuns the formative principle of industrial design that “form follows function.” His aspirations are those of an artist, not a designer. This perspective is reflected in his work.
No, this is not a toy; it is Wanders’ full-size, running “holiday car,” its exterior covered with colored stones.
The designer’s approach should be one of interaction, not imposition: interaction between human needs, the tools, techniques, and materials of construction, the environment, and the designer’s skill and imagination. As designers, we use the best materials, tools, and techniques available to solve real problems in the context of our environment as well as possible. We are directed by human needs and the problems that must be solved to fulfill them, not a desire for self-expression. We are restricted objectively by our tools and materials and their impact on the world, not subjectively by the expanse of our egos. The product of our efforts should show no visible sign of ourselves, though it is born of our imagination. Perhaps this is a fundamental difference between art and design: the former an act of self-expression, often beautiful; the latter an act of integration and resolution, no less beautiful, but assessed differently. As designers, we speak in silence, but our voices, though anonymous whispers, are no less heard. Silently, we change the world.
May 6th, 2014
It’s often useful to take a fresh look at things through the eyes of an outsider. My friend Leanne recently provided me with an outsider’s perspective after reading a blog article of mine regarding Big Data. In it I referred to the three Vs—volume, velocity, and variety—as a common theme of Big Data definitions, which struck Leanne as misapplied. Being trained in health care and, perhaps more importantly, being a woman, Leanne pointed out that the three Vs don’t seem to offer any obvious advantages to data, but they’re highly desirable when applied to the Big O. What’s the Big O? Leanne was referring to the “oh, oh, oh, my God” Big O more commonly known as the female ORGASM. When it comes to the rock-my-world experience of the Big O:
- Volume is desirable—the more the better;
- Velocity is desirable—reaching terminal velocity quickly with little effort is hard to beat; and
- Variety is desirable—getting there through varied and novel means is a glorious adventure.
The three Vs are a perfect fit for the Big O, but not for data. More data coming at us faster from an ever-growing variety of sources offers few advantages and often distracts from the ultimate goal. Leanne doesn’t understand why data geeks (her words, not mine) are spending so much time arguing about terminology and technology instead of focusing on content—what data has to say—and putting that content to good use. I couldn’t agree more.