Data Visualization Lite

In the world of data visualization, we are progressing at a snail’s pace. This is not the encouraging message that vendors and many “experts” are promoting, but it’s true. In the year 2004, I wrote the first edition of Show Me the Numbers in response to a clear and pressing need. At the time no book existed that pulled together the principles and best practices of quantitative data presentation and made them accessible to the masses of mostly self-trained people who work with numbers. I was originally inspired by the work of Edward Tufte, but realized that his work, exceptional though it was, awed us with a vision of what could be done without actually showing us how to do it. After studying all of the data visualization resources that I could find at the time, I pulled together the best of each, combined it with my own experience, gave it a simple and logical structure, and expressed it comprehensibly in accessible and practical terms. At that time, data visualization was not the hot topic that it is today. Since then, as the topic has ignited the imagination of people in the workplace and become a dominant force on the web, several books have been written about quantitative data presentation. I find it disappointing, however, that almost nothing new has been offered. With few exceptions, most of the books that have been written about data visualization, excluding books about particular tools or specific applications (e.g., dashboard design), qualify as data visualization lite.

Those books written since 2004 that aren’t filled with errors and poor guidance, with few exceptions, merely repeat what has been written previously. Saying the same old thing in a new voice is not helpful unless that new voice reaches an audience that hasn’t already been addressed or expresses the content in a way that is more informative. Most of the new voices are addressing data visualization superficially, appealing to an audience that desires skill without effort. As such, they dangle a false promise before the eager eyes of lazy readers. Data visualization lite is not a viable solution to the world’s need for clear and accurate information. Instead, it is a compromise tailored to appeal to short attention spans and a desire for immediate expertise, which isn’t expertise at all.

In a world that longs for self-service business intelligence, naively placing data sensemaking and communication in the same category as pumping gas, we need fresh voices to proclaim the unpopular truth that these skills can only be learned through thoughtful training and prolonged practice. It is indeed true that many people in our organizations can learn to analyze and present quantitative data effectively, but not without great effort. We don’t need voices to reflect the spirit of our time; we need voices to challenge that spirit—voices of transformation. Demand depth. Demand lessons born of true expertise. Demand evidence.

Where are these fresh and courageous voices? Who will light the way forward? There are only a few who are expressing new content, addressing new audiences, or expressing old content in new and useful ways. Until we demand more thoughtful and transformative work, the future of data visualization will be dim.

Take care,


48 Comments on “Data Visualization Lite”

By Neil W. Schneider. June 13th, 2016 at 6:54 pm

Dear Mr. Few,

Thanks for your efforts in pioneering this cause. In our office we hand new hires two things on their first day: The Little SAS Book and Show Me the Numbers. Sadly, I suspect fewer than 20% read them. Too often people say they are good at data presentation and dashboard design, but then they hand you something full of gas gauges and exploded 3-D pie charts.

At the end of your post you mentioned there are only a few who are addressing new content. Did you have anyone in particular in mind? I am always interested in learning more, but like you said, there is so much “lite” visualization these days that it is hard to cut through it all.

Neil W. Schneider, FSA

By Jonathon Carrell. June 13th, 2016 at 8:08 pm


I have found what you describe to be more or less a universal truth. All too often, people falsely believe that because they can navigate a tool’s UI they are proficient and effective in whatever task the tool was designed to support (in this case, data visualization).

You can go so far as to obtain a certification for a given tool (i.e. software) and still be lacking in the skills needed to apply those tools effectively.

To further Stephen’s point, it seems with increasing frequency that, absent truly fresh ideas, many voices within the community have taken to revisiting practices that have already shown themselves lacking. Perhaps it’s a case of too many people wanting to be heard while having too little (if anything) to say.

By Alberto Cairo. June 14th, 2016 at 3:54 am

I’ll have to disagree here. It’s true that all recent (and not so recent: Tufte borrowed A LOT from Brinton, Tukey, Bertin, etc.) books repeat teachings from previous researchers and practitioners. I know that; I’ve written two of those recent books! But I’ve found something interesting and novel in most of the many visualization and infographics books that have been published in the past five or six years. I am not talking about books in their entirety, for sure (it is hard to be sublime without interruption, no matter what Baudelaire said), but many include bits of insight here and there, and that is valuable.

Perhaps I’m too optimistic, but I think that it is a sign that a field is reaching its maturity when progress doesn’t consist of the long strides of brave and solitary explorers opening new paths in the wilderness, but of the little steps of people who hold hands with those who came before them, forming a long chain of knowledge.

By Steven Frazier. June 14th, 2016 at 5:11 am

Feel free to disagree and challenge. My frustration is that I still see too many representations of data that provide no real information, “surface skimming”. There’s no story. There’s nothing in the title that makes a statement to the viewer about what the information is. We place the burden on the viewer to determine what the visualization communicates. I hear managers say that they cannot see progress from our efforts through these visualizations, and it’s primarily because they are looking at representations of raw data. In some cases not only are they misinterpreting the visualization, but they are coming to wrong conclusions. In some cases there’s just too much of this “stuff”, again only representations of data. It is as if we’re not purposeful when we create this flood of visualizations, and the results are not meaningful. When you read Tufte, Cleveland, and Few you get a sense that we’re trying to make connections between our actions and outcomes, between triggers and resulting events. Without that feedback, we will struggle to improve. Forget about the tool and first focus on what you want your audience to understand.

By Alberto Cairo. June 14th, 2016 at 5:18 am

—–My frustration is that I still see to many representations of data without providing any real information – “surface skimming”. There’s no story——-

We agree, but these problems that you mention are addressed in many recent books, including mine, Steve’s “Signal“, Nussbaumer’s, Camões’s, Kirk’s, Evergreen’s, etc., so it’s not a challenge with the books, but a problem with people not reading them.

By Robert Monfera. June 14th, 2016 at 6:20 am


While I refrain from chiming in on the main topic, lacking sufficient depth and research in this area, I often have this feeling, a recurring wave of personal conviction, that we’re still at an early stage of data visualization. Some random reasons:

1. Computer games: While in dataviz we’ve been having fun making axes, and if we’re technically sophisticated, we even make them adapt to the data or respond to it in real time (tweening with new data or filtering), in games all of this is a given, at high performance, with deep scenegraph hierarchies describing the game world. The same goes for styling. For example, consider Nadieh Bremer’s and others’ awesome posts on motion blur, point merging, or depth blurring on scatterplots. By contrast, the techniques for these have been around for ages in SVG (even accessible from D3), and many outside the dataviz community have been using them with SVG, not to mention indie and AAA games. Also, there’s so much to learn from games in terms of control, exploration, immediate feedback, immersion, emotional connection, engagement, and learning, achieved with techniques that are, by comparison, incredibly advanced. Since gaming, education, news, and data visualization overlap even now, and on fundamental grounds the overlap could justifiably be vastly greater, I can’t help but be convinced that there’ll be immense advances just from learning from games. Techniques that were trivial to ’80s demoscene coders will continue to crop up and surprise over the next decade, yet the real promise is not technical catch-up so much as establishing the same strong connection between dataviz maker and user as currently seen in gaming, most importantly, for giving people a vastly deeper, connected, and integrated immersion into the world of data than the current, information-poor, sterile bar charts can.

2. Data and projection spaces: Besides gaming, there are adjacent areas such as mapping and scientific data visualization that deal with vast amounts of data; continuous streaming of information in and out as needed; continuous, adaptive resampling; and ways of helping make sense of vast amounts of data by using our cognitive faculties (currently, mostly visual and spatiotemporal processing) as fully as possible. I suspect that current data visualization merely scratches the surface, and that current user engagement with dataviz is superficial, fleeting, often unmemorable, and isolated in comparison to what could, and I think will, be achieved.

3. Interaction. There’s current debate about scrollytelling and user engagement. We read sound advice to use interactivity in moderation and after deliberation; still, it is true that most discussion and publication on dataviz either assumes presentation on paper, or presentation on other media (screens) in a way that closely resembles paper. However, with current computer screens we are evolving toward “live paper”, and even though there are interactive experiences designed for computers, they’re rarely the focus of current dataviz research and guidance. A great piece of writing on this is Bret Victor’s Magic Ink. In other words, I think that the current preference for scrollytelling merely highlights our present lack of experience and means for providing engaging, interactive content. Clearly, interaction is harmful if the story is single-dimensional enough not to warrant it, or if interactivity can be replaced by inference as described in Magic Ink. There’s some implicit linearity in most current storytelling, though I believe it’s linear mostly because we haven’t yet learnt to weave a rich, non-linear, navigable, open world of knowledge, which brings us to the next point.

4. Sea of knowledge. I used to be annoyed by economic news on the radio, where the reporter announced my country’s latest macroeconomic indicators. They were interesting, but they were not put in context with other countries in the region, or longitudinally. So there it was, this sliver of knowledge, floating free of context in the air. Fundamentally, we still have this experience, no matter the medium. A thoughtful report on economic matters may of course include regional comparisons, offering, e.g., a bar chart or small-multiples chart per country in the region or continent. However, that’s solely at the discretion of the author. If he thought of that, good. If not, then bummer. But I think that a lot of readers have a *distribution* of interests, and shooting for the average may satisfy few. As an example, economic databases include mountains of multidimensional data, yet I can’t currently take a great NYT or FT article and just somehow ‘loop in’ data that the maker didn’t already include deliberately. It’s like a cage: the information is out there somewhere, and here we have a great presentation format, and we’re deeply interested in it *right now*, but we can’t get the answers. The reader had better become versed in the arcane knowledge of where the public databases reside, so they can laboriously duplicate the data cleansing and consolidation exercise, probably arriving at different results, and yes, it’ll be necessary to learn R, Python, or at least a large data visualization tool. Currently we’re really far from freely navigating from an FT article on British demography to swapping data in the same article with analogous French or EU data, while the article and visualizations morph accordingly, transitioning gradually toward some other article that happened to be created by the Washington Post, by OurWorldInData, or by a research group at the World Bank.
Why should we assume that dataviz stays roughly as it is, when it’s utterly deficient in ways that have conceivable remedy? Why expect that dataviz cannot somehow open up to immersion and free navigation?

5. Responsive dataviz. Current data visualization is one-size-fits-all. In the future, I expect that a casual, somewhat disinterested reader will get abbreviated, simplified sports analytics; if the medium detects increasing interest, then the dataviz morphs into something more detailed, possibly responding to the particular interests of the reader, without the current, rigid, mechanistic, and preconceived controls of, e.g., an “enlarge” button, a link, or a crossfilter. A sparkline thumbnail on a home page may gradually morph into a detailed timeline chart, rich with annotations and contextual information, fading away into a sparkline again if the user broadens scope to comparables, making the sparkline now part of a small-multiple grid.

6. Technology. I’m focusing on content and engagement, downplaying the technology aspect, but currently there’s no automated, transparent mechanism for very simple things like placing ticks on an axis such that the ticks are neither too few nor too many, there’s no crowding or overlap, etc. (there are libraries and packages with heuristics, but these still need to evolve). Publishing, design, and typography are huge areas of pertinent knowledge, yet current tools capture only a sliver of that knowledge. Even doing something trivial has obstacles: SVG is a hierarchical representation for good reasons, yet the leading tools (Adobe Illustrator, etc.) don’t support it well; there are groups but no group transforms. There’s all kinds of siloing going on. I wrote elsewhere that we D3 users tend to forget that D3 sits atop SVG; we’re happy making hundreds of separate circles or axis ticks for a scatterplot where it could be done with a single DOM element and a marker. We’re struggling with making multicolored curves. There are all kinds of technological, cultural, and societal boundaries across tools and areas, and cross-pollination happens slowly. It is therefore expected that lots of advances will happen simply due to the absorption of old ideas and knowledge, not to mention the still ongoing evolution of technology, with new classes of devices (smartphone, VR…) appearing every decade or so. For an idea of what things might look like a few decades from now, Fast Times at Fairmont High by Vernor Vinge is a good read. But we don’t have to jump ahead decades; we can go back for profound inspiration. Much of what was conceptualized in the sixties is still waiting to be implemented. A great talk on this is, again, Bret Victor’s The Future of Programming, which has a great section contrasting what Engelbart had in mind for collaboration with what trivial tools like Google Docs or Skype do; Alan Kay’s talks are interesting too.
I often find research papers from the ’80s or earlier that are fascinating and speak to some fundamental problem or angle that feels sorely unaddressed by current means.
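To make the D3/SVG aside above concrete, here is a minimal sketch of rendering all of a scatterplot’s points with a single path element and an SVG marker, rather than appending hundreds of separate circles. The data, the `dot` id, and the styling are hypothetical illustrations, not taken from the comment:

```javascript
// Sketch of the "single DOM element plus a marker" idea from point 6 above.
// Data, ids, and styling are hypothetical, for illustration only.
const points = [[10, 80], [40, 30], [70, 55], [100, 20]];

// One path that visits every datum; SVG paints the marker at each vertex,
// so the scatterplot needs one <path> instead of one <circle> per point.
const d = points
  .map(([x, y], i) => `${i === 0 ? "M" : "L"}${x},${y}`)
  .join(" ");

const svg = `
<svg xmlns="http://www.w3.org/2000/svg" width="120" height="100">
  <defs>
    <marker id="dot" markerWidth="6" markerHeight="6" refX="3" refY="3">
      <circle cx="3" cy="3" r="2.5" fill="steelblue"/>
    </marker>
  </defs>
  <path d="${d}" fill="none" stroke="none"
        marker-start="url(#dot)" marker-mid="url(#dot)" marker-end="url(#dot)"/>
</svg>`;
```

The connecting line is suppressed with `stroke="none"`; markers are painted at each vertex regardless of stroke, which is what makes the one-element scatterplot possible.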

7. Understanding of ourselves. We’re engaging in data visualization for some purpose, and we’re humans. We know relatively little about how our minds and our society work, but this knowledge is increasing, and more research may, and should, target the processing, and even the creative process, of data visualization. I would expect that data visualization will, or at least should, continuously be shaped by our understanding of ourselves: how we perceive, process, and retain information and conclusions; what biases can be misused; and how, in essence, numeracy and other abstract concepts can be evolved into something like a new sensory organ.

8. Society. We currently think about dataviz in terms of news, public policy and human development analysis, and sports and financial analysis done by newspapers, web sites, financial and business consultants, and scientific publications. However, data visualization is better perceived as something fitting into the grand scheme of sharing thoughts, conclusions, support, and questions; influencing and convincing; teaching and helping grow; aiding research; entertaining; and increasing the efficiency and relevance of the human creative process, be it about products, behaviors, or policies. I believe that the public’s indifference toward important events such as global warming or ongoing wars is partly explained by the disparity between our technological means of impacting the planet and its inhabitants en masse and our means of sharing thought, information, empathy, knowledge, and values that would somehow make us behave as if we care about the next person, our grandchildren, or other species. It is this aching gap that can and should eventually be filled by some future thing that has roots in a lot of current disciplines, including data visualization, games, simulations, massive online collaboration, and opportunity of access for all of humanity. Also, there was somewhat limited scope for visualization in an agricultural, preindustrial society, but we need to show galaxies and the universe, the Higgs boson and what went into finding it, planetary exploration, and how things like machine learning work.

These are mere examples of the myriad angles and directions that are bound to have an impact on data visualization (if not lead to its absorption or other reimagination), let alone in combination. These are subjective thoughts, and each of us holds different beliefs about the maturity of a field and the various forces acting on it. Therefore, again without participating in the main discussion, I consider what we call data visualization something embryonic, fertile, unexplored, and pluripotent, and therefore anticipate incremental advances accompanied by changes that will, at least in retrospect, be considered fundamental.

By Jason Mack. June 14th, 2016 at 7:18 am

“Show Me the Numbers” was my first and still my favorite data viz book. In fact I have all of Few’s books and tend to buy any new book in the space, including Cairo’s latest. I don’t necessarily expect each one to do something totally new and groundbreaking; rather, they are just a source of examples and ideas for me.

To move things forward, IMO, we really need more competition and advances in the software space. Tableau was a huge step forward in the mid-2000s, but is still a specialty tool. Other vendors have chased them largely from a superficial perspective. People need tools that make it easy for them to implement best practices.

By kris erickson. June 14th, 2016 at 8:06 am

@Robert Monfera

Robert, in response to points 2 & 3 I’ve had the experience of making a report in Tableau and seeing it utilized on an Epson short throw projector. This projector enables you to touch a point on the board like a mouse click. I have a simple interactive dual-trellis view with actions and filters (that took no more than a day to build really) and two managers are able to collaboratively interact with the data and find real value in the dataviz.

By Stephen Few. June 14th, 2016 at 8:25 am


Had I listed the tiny number of books written during the last decade that extend the practice of data visualization in useful ways, yours would be at the top of that list. Your books have done something that hadn’t been done and desperately needed to be done: they provide guidance for designing effective journalistic infographics. You are a glaring exception, however. Andy Kirk’s book Data Visualization: A Successful Design Process introduced a process for the development of data visualizations, but the audience for that process is tiny. Only entire data visualization systems, such as analytical applications, need to be treated as a major project handled by entire teams through a formal series of steps. Cole Nussbaumer Knaflic, Jorge Camões, and Stephanie Evergreen added only their unique voices, not any content or unique means of expressing it that extends the practice of data visualization in a useful manner. Like you, I try to read every book that’s published in the field and generally find a bit of insight or two in each, but that doesn’t justify the existence of those books, especially when you must wade through page after page of old content, with an ample measure of error and poor guidance along the way, to get to those insights. The authors would have done readers a favor had they written articles to express their fresh insights.

Tufte deserves more credit than you give him. Of the predecessors that you mentioned, Tukey wrote exclusively for statisticians and Bertin wrote a scientific work about visual perception as it applies to data visualization. Only Brinton of the three predecessors in your list wrote a book about data visualization best practices as Tufte did. By the time that Tufte wrote his first book in 1983, however, Brinton’s work was long forgotten. And besides, Tufte extended the field of data visualization in several ways beyond Brinton’s earlier work.

I do believe that you are too optimistic. As a professor and author with many years of work ahead of you, it is in your interest to be optimistic, but your suggestion that the field is “reaching its maturity” is way off, as you surely must realize. You provide a powerful voice in the effort, but you don’t have many peers. Unless this changes, you’ll be swimming in a stagnant pond.

By Robert Monfera. June 14th, 2016 at 8:29 am


It must be an interesting application: multiple people molding aesthetic parameters or model parameters together, possibly via direct manipulation. is an interesting example for building up something from geometric primitives via manipulation of the glyphs themselves. A modest, earlier example of morphing a dataviz (no direct manipulation ATM) is here, featuring Stephen Few’s bandlines (morphable to Tufte’s sparklines etc): and there’ll be more to come, e.g. The caveat is that the concept of morphing dashboards is a very small and probably controversial aspect, and my rough examples just allude to it.

By Robert Monfera. June 14th, 2016 at 8:40 am


I wonder if the old/stable content vs. novel content separation could be solved by publishing interactive books, e.g. as paid apps for tablets. It would enable people to buy chapters with new content, or to get an upgrade of a former edition with new, distinguishable content. Maybe a lot of prospective readers who wouldn’t buy a book or a new edition would gladly splurge on select parts. If a physical book sells for, e.g., $50 plus postage, then, taking into account how much of that actually goes to the author, a $20 interactive book may be preferable.

Of course the current e-book format is insufficient, as it arbitrarily uses fonts, font sizes, aspect ratios and isn’t meant for interactive content, so these would need to be implemented as mobile apps.

Also, maybe books on interactive media should do dogfooding, i.e., present concepts, navigation, and exploration within the book itself, such that it’s a good example in its own right of what the book is about.

By Stephen Few. June 14th, 2016 at 9:01 am


Electronic booklets or articles could serve as a useful way to publish short works on data visualization, except that, as you suggest, e-book platforms are currently lacking. I reviewed all of the popular e-book platforms, hoping to find one that would allow me to create electronic versions of my books that would provide the quality reading experience on which I insist, but I eventually concluded that none of the existing platforms are up to the task.

Even printed books fail, however, if the publisher insists on cutting costs. For example, Alberto Cairo’s important new book “The Truthful Art” contains a large number of figures that cannot be read without a magnifying glass. To display infographics properly, a book of much larger dimensions is required, which is why my books and Tufte’s are large. We self-published so that we could control all aspects of our books’ designs, including the quality of printing and the size of the pages. A book about visual design that cannot be read because its figures are too small is in conflict with the very principles that it advocates. This problem can be seen in most of the books about data visualization that have been printed in the last decade or so. At least they’re printed in color, however. When the first edition of Naomi Robbins’ book was published, Wiley Press refused to print it in color and often printed it using high-speed laser printers, which severely undermined the quality of an otherwise excellent book.

By Stephen Few. June 14th, 2016 at 9:05 am


It is definitely true that “data visualization lite” is not only exhibited in written content but also in the tools that are available. No data visualization tool comes close to providing the functionality and simple interface that could exist if a vendor were willing to focus on quality rather than maximizing the next quarter’s sales.

By Robert Monfera. June 14th, 2016 at 9:14 am


That e-books are sorely lacking means yet another area is ripe for disruptive change. Collation; indexing; lateral navigation in a dense graph of content rather than in a serialized tree structure with a static index; adapting things in size; external links; interaction among readers (highlighting, discussing); reflowing for presentation; restructuring by theme; demonstrating interactive features; multimedia; errata; new editions; feedback; maybe gleaning anonymized stats on popular or hard-to-understand parts of the text; etc. It’s a question of time and of someone’s willingness to do it.

By Stephen Few. June 14th, 2016 at 9:22 am


What you envision for e-books is indeed disruptive in nature and greatly needed, but I’d be happy for now with a few simple improvements. For example, when reading one of my books, I want a figure to always be visible when text that refers to it is on the screen. I want a simple means of making notes that remain attached to content, much as I make notes in a printed book. And finally, I want an e-book platform that works across all popular reading devices. These features would be a good start.

By Bilal. June 14th, 2016 at 9:26 am

I am surprised Stephen did not mention Tamara Munzner’s book “Visualization Analysis and Design”.
The book is full of guidelines and thoughtful analysis, e.g., “Get It Right in Black and White” or “Function First, Form Next”, and offers a new perspective compared with “Show Me the Numbers”.

By Robert Monfera. June 14th, 2016 at 9:52 am


I liked the climate change e-book, which has some chartjunk but was rather advanced for its time; even now its occasional crossfiltering and morphing visualizations feel fresh. But something like this is custom-built and would be overkill for what you need.

Currently, a possible way is to build it as an app with Web technology, e.g. HTML, CSS, JavaScript, D3, React or React Native. Your requirements aren’t that demanding if you’re happy to cover iOS, Android, and desktop only (i.e. things that can run a browser, excluding some dedicated e-readers). It’s suitable for presenting things like a time lapse in-place animation of chart design (here’s the original, and these steps were made to improve on it) or show interactivity examples (filtering etc).

By Stephen Few. June 14th, 2016 at 10:01 am


I did not mention Tamara Munzner’s book because it belongs in a different category. It is a textbook that was written specifically for computer programmers who are studying information visualization at the college level. It is not a book for practitioners of data visualization.

By Stephen Few. June 14th, 2016 at 10:07 am


I’m not interested in redesigning my books to run in a web browser. The work involved would be tremendous and the results would be limited to particular devices. I want a platform that is specifically designed for books, which makes it easy to adapt something that was written for print to work well as an e-book. I was dumbfounded when I discovered that none of the existing e-book platforms are adequate.

By kris erickson. June 14th, 2016 at 11:56 am

Robert and Steve,

I’d like to see more books or articles on data-collaboration. Both real physical spaces and software that allows for simple non-linear ad-hoc interaction-flow between dashboards or data with many people.

Previously, the model I’ve seen is one in which an analyst (or journalist) sits with the data for hours before presenting it: looking for data, mucking with it, cleaning it, looking at different views, then coming to a conclusion, dressing it up, making it as simple as possible, and presenting the findings in a linear fashion. Then we expect people to ‘get it’ as we do after spending only a few minutes looking at the information.

I built an interactive YOY report (orange is this year’s trend, gray last year’s, redactions in gray boxes), and a simple image is located here: Our team put this up on an interactive short-throw projector, and we all stood around, interacted with it, and collaboratively answered questions. Previously I could barely get one person into my tiny cube to look at my little screen. Publishing a report meant we all sat in our small cubes and looked at it separately. Projecting it from one computer meant one person was the driver and others might be reluctant to take or give control. By interacting with the data together, we all move along together and gain similar insights.

By Alberto Cairo. June 14th, 2016 at 12:40 pm

—-added only their unique voices, not any content or unique means of expressing it that extends the practice of data visualization in a useful manner. Like you, I try to read every book that’s published in the field and generally find a bit of insight or two in each, but that doesn’t justify the existence of those books, especially when you must wade through page after page of old content—-

I think that it does. All knowledge disciplines that I have some familiarity with have tons of intro-level books. I have read no fewer than seven or eight intros to statistics, intros to journalism, or intros to visual design. All those books cover largely the same material, and what makes each (slightly) unique is just the author’s voice and angle. There is value in that. Each new book adds to the knowledge acquired from the previous one.

I know that the word “ecosystem” is overused, but I think that it applies here. Reading one single book may give you some knowledge of a field; reading many of them, even if they are very similar to each other, makes that knowledge richer, deeper, and more nuanced.

By Stephen Few. June 14th, 2016 at 1:02 pm


I agree with you that reading multiple books on a topic can be useful, but only if those books provide content that is different from other books or presents the same content in a way that promotes better understanding. This is true of few books that have been written about the practice of data visualization during the last decade. If you feel differently, feel free to provide examples of content that doesn’t appear in “Show Me the Numbers” or that does appear but is presented in a way that extends our understanding of that content. I’ve observed that when these books depart from the content that exists in “Show Me the Numbers,” they often introduce errors and provide bad advice.

By Stephen Few. June 14th, 2016 at 1:31 pm


Let me add some more perspective to our discussion above. You and I are teachers and authors. It is worth our time to read multiple introductory statistics books because we are always looking for even slightly better ways to explain statistical concepts to our students and readers. Nevertheless, I would have paid twice the price of most of those introductory statistics books that I’ve read for a brief article that covered the few excerpts that I found useful. Unlike you and me, it is not worth the time of the typical person who wants to learn statistics at an introductory level to read several books that cover the same content with only minor variations. The same is true of most people who want to learn the basic principles and practices of data visualization.

By Stephen Few. June 14th, 2016 at 1:39 pm


The topic of collaborative data sensemaking is indeed worthwhile and it hasn’t been addressed sufficiently. I’ll try to address it in a basic manner soon. I think of collaborative data sensemaking as falling into three categories that affect design: 1) asynchronous collaboration (people are not working with the data simultaneously and are not in the same location), 2) synchronous collaboration across multiple locations, and 3) synchronous collaboration in the same location. The nature of the displays and the interactions must differ to some degree to suit these situations. Stay tuned for more to come, either in this blog or in a newsletter article. Thanks for the suggestion.

By Mark B. June 16th, 2016 at 12:25 am

Unless I am missing the point, self-service visualization tools have hit a limit in what can be achieved through exploring and visualizing raw data. Unless the next step is to explore methods that facilitate inference (statistical models, for example), progress in understanding will not be made. This step requires expertise in implementing and understanding such methods, and I doubt it can be automated in off-the-shelf solutions.

The assumption that users without expertise in inference and decision making can make sense of raw data is a poor (and possibly dangerous) one. Take, for example, the data visualization field. This is supposedly a field of trained experts who, as a group, produce poorly designed and reported experiments. If such a field of experts cannot design and execute adequate experiments, how can we expect the lay user to do so?

By Stephen Few. June 16th, 2016 at 8:41 am

Mark B,

It is definitely the case that inferential statistics requires a great deal of statistical knowledge. Most visual analysis tools have incorporated some degree of support for inferential statistics, but none that I’m aware of are making it clear to their customers that inferential statistics requires a high level of skill. I don’t believe, however, that so-called “self-service” visualization tools have come anywhere close to supporting descriptive statistics well. Even the best of these tools have far to go before they support descriptive statistics well. Even basic descriptive statistics requires a level of skill that few users of these tools possess. These basic skills are well within reach with proper training, but the self-service message of the vendors suggests that this training isn’t necessary.

By jlbriggs. June 16th, 2016 at 10:22 am

I have to say that I very strongly agree with this point:

Stephen said:

“Unlike you and me, it is not worth the time of the typical person who wants to learn statistics at an introductory level to read several books that cover the same content with only minor variations. The same is true of most people who want to learn the basic principles and practices of data visualization.”

The time, effort, and money required to acquire and ingest all of these books are simply not an option.

To an extent I feel that at least some of the books that I have seen excerpts of amount to preaching to the choir. I am not completely clear who the audience really is, other than other practitioners who already get most of the concepts, or other academics who could have written the same book.

I don’t know where I stand in this discussion really, other than a vague sense of confusion, and a constant wondering of which of the books that I’ve added to my Amazon wishlist I should actually buy :)

By Daniel Zvinca. June 18th, 2016 at 3:18 am

Very much agree with Mark B.

Are there any universally accepted definitions of a basic level of knowledge in any field? What is the basic level of statistics required for a practitioner in the data visualization field? What is the minimal level of knowledge of data visualization that a good statistician should have? Is any minimal knowledge of data storage and querying required of a data analyst? What is the basic level of knowledge a decent data sensemaking specialist should have in data storage and extraction, statistics, and data visualization, besides strong field-related knowledge (economics, logistics, transport, warehousing, production, quality, …)?

As long as vendors’ advertising reduces these requirements to the trivial level of drag and drop within a tool, business intelligence projects (otherwise very profitable) will continue to fail.

By jlbriggs. June 21st, 2016 at 9:56 am

It also occurs to me at this point, that multiple voices saying the same thing isn’t really a bad thing.

Does each new book add a great new area of insight? Perhaps not.

But does a chorus of voices promoting very similar practices present a stronger message than a stoic few? I think there’s an argument for that.

By Stephen Few. June 21st, 2016 at 11:33 am


When I said that these books contain nothing new that is accurate and useful and nothing old that’s expressed in a uniquely useful way, I was not saying that they’re all saying the same thing. They aren’t. Each of these books includes errors and varying degrees of bad advice. If they all said the same thing and did so equally well, then readers could choose any of these books and get the same value. If this scenario were true, however, it would still produce waste in the form of redundant publication efforts and also in the form of unwanted repetition foisted on those who purchased and read multiple books, resulting in lost money and time. Imagine how much more productive it would be if authors only wrote books when they had something new to offer.

By Andy Cotgreave. June 22nd, 2016 at 8:25 pm

If I understand this correctly, when people ask which book I recommend they read, should I point them to Willard Brinton’s Graphical Methods of Presenting Facts, since that covered most of the material covered afterwards by Tufte, yourself, and others?


By Stephen Few. June 22nd, 2016 at 9:26 pm


You do not understand me correctly. You rarely do. I have the impression that you are not interested in understanding.

We’ve learned quite a bit since Brinton, especially about the perceptual and cognitive aspects of data visualization. What hasn’t changed much since Brinton’s time is the insistence of unskilled practitioners on repeating the mistakes of the past.

By Ben Jones. June 23rd, 2016 at 10:51 am

I celebrate all of the new voices in this budding and as yet nascent field, whether they express existing knowledge in new ways or add to the existing body of knowledge with new insights. In fact, I encourage more and more people to add their own voice to the dialogue through whatever channel they find most effective – blogs, books, social media, or good old-fashioned face-to-face conversation.

Data literacy has a long way to go on this planet, and the next generation needs many talented practitioners and academics to continue pushing the boundaries forward in both small steps and big leaps. It’s an evolution that’s happening. It’s great to challenge people to make significant advances, and I applaud you for that. What I am critical of here is that you are making such a contribution the bar of entry into the discussion.

Those who are just getting started often feel very intimidated by the data visualization community. Even if all they’re doing is giving their perspective about known principles and techniques, theirs is a unique perspective that adds to the developing dialogue. Let’s welcome them into this community, even if their initial contributions are tiny. How many of those will go on to take this field to entirely new places?

By Stephen Few. June 23rd, 2016 at 12:01 pm


I would be surprised if you didn’t celebrate all of the new voices in data visualization. You have two strong incentives to do so: 1) you count yourself among them, and 2) you work for Tableau, a software vendor that derives benefit from any voice in the data visualization space that doesn’t make negative comments about Tableau.

You seem to be saying that you support all new voices in the data visualization space regardless of what they say. Does the quality of their work not matter? Does it not matter if they provide poor guidance? Should we not vet the work of so-called data visualization experts? Data literacy is not advanced by bad advice any more than English literacy is advanced by a teacher who speaks English poorly.

The “bar of entry” into the ranks of data visualization expertise ought to be high. By its very definition, expertise means a “high level of skill.” Data visualization will not benefit by allowing anyone to proclaim themselves an expert regardless of skill. Progress in a field is never achieved by setting the bar low.

Rather than making sweeping claims about the benefits of all new voices, it would be more helpful if you would identify specific examples of useful contributions that appear in books published in the last decade. We who work with data in support of decision making should be in the habit of supporting our positions with evidence.

By Ben Jones. June 23rd, 2016 at 1:02 pm

It’s really sad to me that you assume to know my motivations, Steve. You’ve done that before to me in private communications, and I asked you to please not do that. If you knew me at all you would know that I connect and interact with many people in the data visualization community at large. I’m looking forward to speaking at Plotly’s conference in November. I recently reviewed Quartz’s new chart builder Atlas on my blog. I engage in both laudatory and critical conversations about all of these tools, including Tableau. I’m not a talking head. Writing me off as such is your choice, and I accept, not without a great deal of regret, that you have chosen to do so.

Regardless of what you see as my motivations for encouraging people to enter the data visualization dialogue and share their thoughts and expertise, I’ll continue to do so and hope that you do, too. We all started off reading (and loving) your books. I wish you had a different attitude, but again, that’s your choice.

I feel strongly that we DO also need voices to reflect the spirit of our time. I’ve learned a lot from even the most novice of practitioners because their first step was different than my own.

I’ll leave it at that, and I wish you all the best, Steve.

By Stephen Few. June 23rd, 2016 at 1:45 pm


I did not claim to know your motives. I pointed out that you count yourself among the new data visualization voices and that you work for a software vendor in the space. These are both incentives to support the new voices of data visualization. Whether or not these incentives motivate you and to what degree, I cannot say, but I must consider the possibility. One of the essential practices of critical thinking is to identify the potential incentives and biases of people when evaluating their statements. If you don’t want your motives to be questioned, you shouldn’t be working for a data visualization software vendor. Would you question the motives of a commentator about gun control who works for the NRA? I would hope so.

Have you been publicly critical of Tableau? If so, I’m surprised and would appreciate it if you would direct me to this criticism. It will help me understand your perspective.

To correct a point that you made, we were not discussing your “motivations for encouraging people to enter the data visualization dialogue and share their thoughts and expertise.” In the blog article to which you responded, I expressed my concern that few of the data visualization generalists who have published books in the last decade have added anything of value. I have never discouraged people from entering into dialog about data visualization. In fact, my work is dedicated to encouraging this. How you and I seem to differ is that I want the dialog to be productive, which requires guidance. I want people to learn from it. Not every dialog results in learning. In fact, many of the blogs, articles, books, and dialogs about data visualization of recent years have led to a regression rather than a progression in the field. This should concern you, but the fact that you “celebrate all of the new voices in this budding and nascent field” indicates that errors and poor advice do not concern you. You cannot encourage progress by celebrating mediocrity.

The purpose of this blog post was to encourage new voices to have something useful to say. I’m not sure what you mean when you say that we “need voices to reflect the spirit of our time.” What is this spirit to which you refer? Much of the spirit of our time does not deserve to be encouraged.

Like you, I learn a great deal from novice practitioners. My workshops are filled primarily with novices. I cherish the opportunity to help them get started on the right foot. Their questions and insights add great value to my workshops. I encourage them to learn the basics of data visualization well enough to think for themselves. As a teacher with expertise in data visualization, I provide feedback and direct my students and readers toward best practices. I don’t coddle them and give them awards for participation alone. I encourage them to pursue excellence in their work. I’m a teacher, not a marketer.

By Ben Jones. June 23rd, 2016 at 2:32 pm

Okay, you baited me. I’ll comment one final time to answer the things you’ve incorrectly assumed about me. Please do know that it’s very belittling to make assumptions about people without having taken the time to engage in conversation with them or listen to them at all.

1. I routinely engage in discussions with the user community both in private and in public about what we can do at Tableau to make our software even better. That’s actually part of the company culture. We all actively look for pain points and seek to understand and address them. For one example of many, here I agree wholeheartedly with Shawn Wallwork that double clicks on maps and charts in Tableau can be “pretty disorienting” (they can be!).

2. I appreciate, use and teach my UW students many different data tools that are currently available. Each have their own strengths and weaknesses. I laud the talented Quartz team for launching Atlas and I reviewed the newly launched product.

3. I read, thoroughly enjoyed and reviewed Cole Nussbaumer Knaflic’s helpful book “Storytelling with Data” in which, besides echoing and reinforcing many helpful principles in a new and accessible voice, she adds something that I hadn’t personally come across before: applying lessons from theater, cinema, and fiction to the craft of data storytelling, including determining flow and storyboarding in chapters 7 and 8.

I’m not going to defend myself against your accusations any longer. I’m a marketer, teacher and practitioner. It’s actually quite possible to carry out all of these different roles in a conscientious way :)

And with that I bid you good day. I don’t have any more time to devote to this thread. I have other things to take care of now. Take care, Steve. Please do know that you’re a data hero to many, and this kind of discourse actually detracts from your unique ability to bring data literacy to the masses. That’s the spirit of our time.

By Stephen Few. June 23rd, 2016 at 3:19 pm


I definitely wasn’t trying to bait you. You began this discussion by posting comments here in my blog. I have merely responded to your comments. You’re welcome to abstain from commenting in the future if you’d prefer that I refrain from responding.

Your agreement with Shawn Wallwork that double clicks on maps and charts in Tableau “can be ‘pretty disorienting’” is a rather timid criticism of the product that you’re paid to promote. Marketers and salespeople are trained to acknowledge a minor error in their products to give the appearance of objectivity. The rather tepid comments that you wrote about Atlas from the Quartz Team of GE (one of your large customers), a product that doesn’t compete with Tableau, do not demonstrate objectivity.

Regarding Cole Nussbaumer Knaflic, it is absolutely true that Cole included a section about storytelling with data that applies lessons from theater. The content was largely derived from Nancy Duarte’s thoughtful work. The fact that Cole included this information in a book about data visualization is indeed potentially useful as an introduction to the importance of developing presentation skills if your job calls for them.

You have no choice but to defend yourself against reasonable suspicions that, as a member of Tableau’s marketing department, you are biased. While it is true that you could possibly, with heroic effort, play this role and yet remain completely objective about data visualization principles, practices, and tools, it is unlikely. You cannot be employed by a data visualization vendor and expect the world to leave your objectivity unquestioned. Rather than acting offended when your objectivity is questioned, why not simply embrace your role as a Tableau marketing professional and be upfront about your bias? If you were not biased in favor of Tableau, you would have been fired long ago.

Your final comment left me puzzled. I suspect others who read it will find it puzzling as well. You identify the “spirit of our time” as one that does not appreciate critical discourse. If this is true, I will resist the spirit of our time. I believe that my work is respected by readers and students because I approach data visualization critically, not in spite of this fact.

By jlbriggs. June 24th, 2016 at 6:09 am


“For one example of many, here I agree wholeheartedly with Shawn Wallwork that double clicks on maps and charts in Tableau can be “pretty disorienting” (they can be!).”

Come on.


You want this to be the example of how you are objectively critical of Tableau?

While I think there is plenty of room for disagreement on this subject, these posts come across completely as whining and stamping of feet – purely emotional responses without substance.

I feel certain you have substantive counterarguments to this post, and I would love to hear them. I am having trouble finding a good way to articulate the ways in which I disagree with Stephen on this point, and I am hoping that someone else can take on that job in a constructive way.

I certainly do have to agree with Stephen that if the “spirit of our time” is “OMG how dare you criticize?!”, then, no thanks.

By Enrico Bertini. June 24th, 2016 at 10:03 am


I don’t feel like I am learning much from this debate.

More interesting to me would be to discuss what books we’d actually like to see written.

I’ll start first with my partial list of ideas off the top of my head …

1) Text Visualization. Text is very poorly covered by existing books and it’s of hugely practical importance. There are lots of companies and organizations that are interested in looking into text data sets and the existing vis practices are incredibly poor (e.g., Wordle makes me itchy) and research is not exceptionally good either (with exceptions). I’d love to see a book focusing only on this aspect. I think there is a lot of space to instruct practitioners and inspire researchers in this area.

2) Designing Data Visualization User Interfaces (or Interactive Data Interfaces). Virtually all vis books (your “Now You See It” is an exception) focus very heavily on static representation. But what gets me excited about vis is its powerful analytical capabilities, that is, how people can make sense of complex realities by interacting with data. Tamara Munzner’s book covers a bit of that, but I think it’s not enough. Helping people design effective interactive data interfaces is really hard, and we need more guidance. One particularly deleterious trend in this area is the heavy focus on technology. Interaction is not about technology; it’s about how to translate questions into actions, into reactions, into questions, and so on.

3) Visual Machine Learning. Machine learning is behind tons of automated decisions and systems we use today, and this trend is unstoppable. There will be far more of it in the future. There are two extremely interesting aspects of how vis should contribute here. First, ML models are terribly opaque and sensitive to choices, and modelers need a) a methodology and b) effective tools to better understand what models do and what their implications are. As things look now, people are shooting in the dark. We also need better ways to incorporate domain knowledge into these systems. Second, there are many methods in ML that could make visualization far more powerful, but they are not well integrated. I’m always amazed how poorly integrated unsupervised/exploratory data mining methods (as well as statistical methods, by the way) are with interactive visualization. They would be such a powerful combo together! I am thinking of methods such as data clustering, rules, etc. P.S. It’s unfortunate that people are so excited about predictive modeling when exploratory methods look so promising.

4) Data Visualization Case Studies. Humans learn a lot from examples and have an amazing ability to extrapolate from specific cases. I am always frustrated to see how little there is to learn about how a team or a single expert went from ideation to deployment and what happened in between. We always see the end product, but very rarely the process and the struggle of creating excellent final results. Tamara Munzner has done some really good work with a couple of recent papers, but I’d love to see a whole, more accessible book with carefully crafted stories of how some really good experts went from A to Z. That would also be very entertaining, I guess.

Here is my quick list. I guess there is way way more to add (luckily for us!).

My hope is that we can move on with this conversation and focus on the positive side of it: we need to push this field forward.

Whether one agrees with you or not, there is no doubt we need more books and way more work. So, let’s do the work!

Take care.

By Stephen Few. June 24th, 2016 at 10:28 am


Your suggestions for useful books are inspiring and helpful. This is precisely the kind of response that I was hoping to encourage.

The “machine learning” topic deserves a great deal of discussion, but I’ll refrain from taking this particular blog post in that direction. A great deal of thought must be invested in drawing appropriate lines between the work of humans and the work of computers, which many machine learning projects fail to consider.

Useful data visualization case studies are desperately needed. Doing them properly is time-consuming and expensive, but they need to be done. It’s important that these case studies be planned in advance of the data visualization projects that they evaluate, for studies of failed projects would be just as informative as studies of successful ones.

I absolutely agree that little has been written about interactive visual analysis interfaces. I’ve considered writing design guidelines for “analytical applications” (custom applications that are used to perform a particular set of analytical tasks), but I haven’t committed to the project yet.

I’m not convinced that “text visualization” is all that different from data visualization in general. It is usually done best using standard data visualization techniques. For example, word clouds, which are analytically impoverished, could be replaced with treemaps or, better yet, with bar graphs such as those that I proposed in my article about “wrapped graphs.” I realize that there is much more to the topic than this, however. Whether text analytics requires unique data visualizations or not, I agree that we’re in need of better written resources about this useful type of analysis.
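[Editor’s note: as a minimal sketch of the word-cloud point above, the following is illustrative only; it is my own example, not taken from Few’s “wrapped graphs” article. It counts word frequencies and renders them as a sorted, text-based bar chart, the kind of quantitative comparison that a word cloud encodes only vaguely through font size.]

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Count word occurrences, ignoring case and punctuation;
    return the top_n (word, count) pairs, most frequent first."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

def text_bar_chart(freqs, width=30):
    """Render (word, count) pairs as a horizontal bar chart.
    Bars are scaled to the largest count so relative magnitudes
    can be read precisely, unlike a word cloud."""
    if not freqs:
        return ""
    label_w = max(len(word) for word, _ in freqs)
    max_count = freqs[0][1]
    lines = []
    for word, count in freqs:
        bar = "#" * round(width * count / max_count)
        lines.append(f"{word.ljust(label_w)} {bar} {count}")
    return "\n".join(lines)

sample = "data beats opinion; data informs, opinion persuades, and data wins"
print(text_bar_chart(word_frequencies(sample)))
```

Because the bars share a common baseline and are sorted by value, the reader can rank and compare frequencies at a glance; the same data in a word cloud supports only rough judgments of relative size.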

By Allan Walker. June 28th, 2016 at 10:51 am


I see the effect of “data visualization lite” scenarios being played out on a daily basis.

I see “good practices” (and I often hold up your examples from “Special Interview with Stephen Few and Data Visualization expert” by Alexander ‘Sandy’ Chiang, Research Director, Dashboard Insight, Common Pitfalls, Effectively Communicating Numbers…) frequently challenged by inappropriate design and chart-type selection. Here, I believe your frustration is that of a teacher: the children are not listening to or comprehending your lessons and guidance. In fact, they may actually be rebelling against “rule-setting” or standards, although in my personal experience it’s usually innocent ignorance.

Optimistically, I believe data visualization has a grander scope and is maturing. Outputs in media such as data-driven animation (complete with stories), such as 422 South’s “Data Visualization Reel,” shine the light for me. However, as data visualization becomes ever more interactive, more tactile, and more immersive, I’m unsure the written word (other than technical guidance) is the most effective educational medium anymore. I watch the most wonderful and engaging TED talks with data visualization practitioners who share their knowledge and wisdom.

By Stephen Few. June 28th, 2016 at 11:15 am


Other than Hans Rosling, what data visualization practitioners have been featured in TED talks who have provided useful content? I’m not aware of any. I was scheduled to give a TED talk several years ago but backed out when, two days before the talk, they sent me a contract containing unacceptable terms. They wanted the right to edit and repurpose the content of my talk and to combine it with content delivered by others as they saw fit, without my permission. Imagining TED combining my content with that of David McCandless was a nightmare I couldn’t accept.

I just now watched the 3-minute lightshow created by 422 South. This isn’t useful information visualization, for it doesn’t inform. Can you tell me what you learned by watching the light trails buzz around the globe? If there is a story in there somewhere worth telling, it could be told in a manner that is both informative and engaging. This illustrates my concern with data visualization today. Most of what’s getting attention as “data visualization” today is little more than uninformative entertainment designed to appeal to people who are satisfied watching dancing streams of color. This is worse than “data visualization lite”; it is a trivialization of data visualization, divested of content for lazy minds.

By Allan Walker. June 28th, 2016 at 8:45 pm


I apologize, I should have put the 422 South piece in context for you, and your readers. The show-reel on its own, in my opinion, acts only as a technology showcase, giving me, a practitioner, an understanding of how to visualize and process large amounts of spatiotemporal data.

However, in context, and to re-frame, these outputs were part of my learning and understanding of flow, movement, and interplay as they were embedded into such documentaries as PBS “America Revealed”, where the accompanying audio commentary complements the visuals.

Yes, these are “just” a scatterplot with trails, but I certainly find visualizing x, y, z, t non-trivial, and it has historically been computationally expensive.

Aaron Koblin, whose TED talk I enjoyed, inspired me with his “flight patterns” visualization project, I see the 422 South output as an extension of this learning.

Jer Thorp, another TED “talker,” is another data visualization practitioner whose work I admire; I found his content useful.

I admire your ethic and stance with the TED platform, and understand your right to protect your intellectual property and “brand”.

By Stephen Few. June 29th, 2016 at 7:57 am


The 422 South piece does not provide practitioners with “an understanding of how to visualize and process large amounts of spatiotemporal data.” Little information comes through in these animated streams of colored lights. When they appear in a PBS documentary, at best they serve as a cool visual effect that says, “Look here — a lot is happening.” Without narration, these animations are nothing but eye candy. They complement the narration as artistic flourishes, not as information. When the creators of these animations possess more than a superficial understanding of the underlying data, they did not acquire that understanding by watching these animations, but by using other forms of display that are more informative.

Aaron Koblin and Jer Thorp are not involved in the same field of work as I am. Great confusion has been caused by extending the term “data visualization” to “data art.” Koblin and Thorp are artists. I can appreciate their work as art, but not as visualizations that can be used to explore, analyze, and communicate information.

Advancements in data visualization have been significantly set back by this confusion between data visualization and data art.

By Shawn Wallwork. September 8th, 2016 at 11:14 am

I just discovered this thread. Since I was mentioned in the exchange between Stephen and Ben, I suppose I should comment. Ultimately I’m confused by the exchange. Ben works for a toolmaking company. Stephen, your blog is about the evolution (or lack thereof) of an entire industry that produces data visualizations, and the quality of the ‘experts’ guiding this industry. How exactly is a toolmaker’s opinion in a discussion of the evolution of an industry disqualified because he is a toolmaker? Should we disqualify your opinion on this entire subject since you are a book writer, whose book sales will benefit if there have been no better ones written since you wrote yours?

I think there’s a pretty good case to be made that this entire thread should be ignored because you wrote “Show Me the Numbers,” a much stronger case than one for disqualifying Ben just because he works for Tableau, a company I don’t work for. But I do love/hate their tool!


By Stephen Few. September 12th, 2016 at 10:09 am

Hi Shawn,

We all have biases. It is certainly reasonable to consider book sales as an incentive that creates a bias in my perspective. You have no way of knowing for sure whether my desire for better data visualization resources, including books and tools, is genuine, regardless of how they would affect my revenues. You do well to consider the incentives and biases of everyone who exercises influence in our industry, including mine.
