Thanks for taking the time to read my thoughts about Visual Business
Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions
that are either too urgent to wait for a full-blown article or too
limited in length, scope, or development to warrant the larger venue.
For a selection of articles, white papers, and books, please visit
my library.
October 18th, 2013
Review of the Research Study “What Makes a Visualization Memorable?”
Michelle Borkin et al. (Harvard School of Engineering and Applied Sciences and MIT)

No topic within the field of data visualization has created more heated debate over the years than that of “chart junk.” This is perhaps because, when Edward Tufte first introduced the concept, he did so provocatively, inviting a heated response. Ever since, this debate has not only flourished without signs of cessation, but it has generated some of the least substantive and defensible claims in the field. I’ve contributed to this debate many times, always trying to rein it back into the realm of science. Whenever a research study that appears to defend the usefulness of chart junk is published, the Web immediately comes alive with silly chatter, consisting mostly of chest thumping: “Ha, ha! Take that!” The latest study of this ilk was presented this week at the annual IEEE VisWeek Conference by Michelle Borkin et al. (students and faculty at Harvard and MIT), titled “What Makes a Visualization Memorable?” Yeah, you guessed it, apparently it’s chart junk.
When I last attended VisWeek in 2011, my favorite research study was presented by this same researcher, Michelle Borkin. Her study produced a brilliant, life-saving visualization of the coronary arteries that could be used by medical doctors to diagnose plaque build-up that indicates heart disease. It was elegant in its simplicity and clarity. Borkin’s latest study, however, does not resemble her previous work in the least. Here’s the paper’s abstract in full:
An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: “What makes a visualization memorable?” We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon’s Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards how to design effective visualizations.
The authors collected a large set of data visualizations from the Web. Each visualization was coded by the research team for various characteristics (type of visualization, number of colors, data-ink ratio, the presence of pictograms, etc.). During a test session, subjects were shown one data visualization at a time for one second each, followed by a 1.4 second period of blank screen before the next visualization would appear. Each session displayed approximately 120 visualizations. The test was set up as a game with the objective of clicking whenever a visualization that appeared previously appeared a second time. A particular visualization never appeared more than twice. Hits (the subject indicated correctly that the visualization had appeared previously) and false hits (the subject incorrectly indicated that a visualization had previously appeared when it hadn’t) were both scored, but misses were not. The study’s objective was to determine which of the characteristics that were coded caused visualizations to be most memorable.
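The scoring scheme described above can be sketched in a few lines of code. This is only my own illustration of the game's logic, not code from the study; the function name and data shapes are invented for clarity. It shows why misses never factor in: the score only ever changes when the subject clicks.

```python
def score_session(presentations, clicks):
    """Score one memorability-game session.

    presentations: the sequence of image IDs shown one at a time;
        a repeated ID is that image's second (and final) appearance.
    clicks: a parallel sequence of booleans, True wherever the
        subject clicked "I've seen this before."

    Returns (hits, false_hits). A hit is a click on a second
    appearance; a false hit is a click on a first appearance.
    Misses (failing to click on a repeat) are not counted,
    matching the scoring described above.
    """
    seen = set()
    hits = false_hits = 0
    for image, clicked in zip(presentations, clicks):
        if clicked:
            if image in seen:
                hits += 1
            else:
                false_hits += 1
        seen.add(image)
    return hits, false_hits

# Example: "A" repeats and is correctly clicked (hit); "C" is
# clicked on its first appearance (false hit); the repeat of "B"
# is missed, which affects nothing.
print(score_session(["A", "B", "A", "C", "B"],
                    [False, False, True, True, False]))  # (1, 1)
```

Note that under this scheme a visualization is "memorable" merely if its repeat is recognized, which is exactly the limitation discussed below.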
Any form of presentation, be it a book, speech, lecture, infographic, news story, or research paper, to name but a few, should be judged on how well it achieves the author’s objectives and the degree to which those objectives are worthwhile. A research paper in particular should be judged by how well it does what the authors claim and how useful its findings are to the field of study. This study does not actually do what it claims. What it actually demonstrates is quite different from the authors’ claims and does not qualify as new information.
The title of this study, “What Makes a Visualization Memorable?,” is misleading. It doesn’t demonstrate what makes a visualization memorable. A more accurate title might be: “When visualizations are presented for one second each in a long series, what visual elements or attributes most enable people to remember that they’ve seen one when it appears a second time?” That’s a mouthful and not a particularly great title, but it accurately describes what the study was actually designed to test. The study did not determine what makes a visualization memorable, but what visual elements or attributes included in the visualization would be noticed when viewed for only a second and then recognized when seen again. A data visualization contains content. Its purpose is to communicate that content. A visualization is not memorable unless its content is memorable. Merely knowing that you saw something a minute or two ago does not contribute in any obvious way to the goals of data visualization. And, more fundamentally, remembering something about the design of a visualization is nothing but a distraction. Ultimately, only the content matters; the design should disappear.
When an image appears before your eyes for only a second and then disappears, what actually goes on in your brain perceptually and cognitively? When the image is a visualization, you don’t have time to even begin making sense of it. At best, what happens in that brief moment is that something catches your eye that can be stored as a distinct memory. When the task that is being tested is your ability to recall if you’ve seen the image before when it’s flashed in front of your eyes a second time, then it’s necessary that the memory differentiates the image from the others that are being presented. If a clean and simple bar graph appears, there is nothing unique, no differentiator, from which to form a distinct memory. At best, in that single second that you view it, the concept “bar graph” forms in your brain, but you’re seeing many bar graphs and nothing about them is being recorded to differentiate them. If you see something with a profusion of colors, that colorful image is imprinted, which can serve as a distinct memory for near-term recall. If you see a novel form of display, a representation of that novelty can be retained. If you see a diagram that forms a distinct shape, it can be temporarily retained. What I’m describing is sometimes called stickiness. Something sticks because something about it stood out as memorable. That something rarely has anything to do with the content of the visualization.
Visualizations cannot be read and understood in a second. Flashing a graph in front of someone’s eyes for a second tells us nothing useful about the graphical communication, with one possible exception: the ability to grab attention. Knowing this can be useful when you are displaying information in a context that requires that you first catch viewers’ eyes to get them to look, such as in a newspaper or on a public-facing website. This potential use of immediate stickiness, however, was not mentioned in the study.
So, when the authors of this study made the following claim, they were mistaken:
Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.
Whether the assertion is true or not, this study did not test it. They went on to say:
Clearly, a more memorable visualization is not necessarily a more comprehensible one. However, knowing what makes a visualization memorable is a step towards answering higher level questions like “What makes a visualization engaging?” or “What makes a visualization effective?”.
Although the first sentence is true, what follows is pure conjecture. The authors seemed to wake up toward the end of the paper when they stated:
We do not want just any part of the visualization to stick (e.g., chart junk), but rather we want the most important relevant aspects of the data or trend the author is trying to convey to stick.
Yes, this statement is absolutely true. Unfortunately, this study does not address this aspect of stickiness at all. Sanity prevailed when they further stated:
We also hope to show in future work that memorability — i.e., treating visualizations as scenes — does not necessarily translate to an understanding of the visualizations themselves. Nor does excessive visual clutter aid comprehension of the actual information in the visualization (and may instead interfere with it).
If they do go on to show this in the future, they will have succeeded in exposing the uselessness of this paper. If only this realization had encouraged them to forgo the publication of this study and quickly move on to the next.
If we reframed this study as potentially useful for immediately catching the reader’s eye and that alone, the following findings might have some use:
Not surprisingly, attributes such as color and the inclusion of a human recognizable object enhance memorability. And similar to previous studies we found that visualizations with low data-to-ink ratios and high visual densities (i.e., more chart junk and “clutter”) were more memorable than minimal, “clean” visualizations.
More surprisingly, we found that unique visualization types (pictoral [sic], grid/matrix, trees and networks, and diagrams) had significantly higher memorability scores than common graphs (circles, area, points, bars, and lines). It appears that novel and unexpected visualizations can be better remembered than the visualizations with limited variability that we are exposed to since elementary school.
As I mentioned in the beginning, however, these are not new findings. It’s interesting that the finding described in the second paragraph above contradicted the authors’ expectations. They assumed that familiar visualizations, such as bar and line graphs, would be more memorable than novel visualizations. We’ve known for some time that novelty is sticky. The wonderful book by brothers Chip and Dan Heath, Made to Stick, made a big deal of this.
The one part of this study that I found most interesting and informative was a section that wasn’t actually relevant to the study. The authors quantified the number of times particular types of visualization appeared in four particular venues: scientific publications, infographics, all news media, and government and world organization reports. I found it interesting to note that news media of all types use bar and line graphs extensively, but infographics seldom include them. It was also interesting that tables supposedly appear much more often in infographics than in scientific publications, which doesn’t actually ring true to my experience.
A few other problems with the study are worth mentioning:
- The authors created a new taxonomy for categorizing visualizations that wasn’t actually useful for the task at hand. When visualizations are revealed for only a second, there is nothing that we could reliably conclude about the comparative memorability of the visualization types defined by their taxonomy. Because their taxonomy did not define visualization types as homogeneous groups, comparisons made between them are meaningless. For example, grouping all graphs together that show distributions (histograms, box plots, frequency polygons, strip plots, tallies, stem-and-leaf plots, etc.) is not useful for determining the relative memorability of visualization types.
- They described bars (rectangles) and lines (contours) as “not natural,” but diagrams, radial plots, and heat maps as “more natural” and thus more memorable. From the perspective of visual perception, however, few shapes are more natural than rectangles and contours, which represent much of our world.
- I found it interesting that the racial mix of participants in the experiment (41.7% Caucasian, 37.5% South Asian, 4.2% African, 4.2% East Asian, 1.1% Hispanic, and 11.3% other/unreported) was considered by the authors to be “sampled fairly from the Mechanical Turk worker population.” When did Mechanical Turk become the population that matters? Wouldn’t it be more useful to have a fair sample of the general population? A 37.5% proportion of South Asians is not at all representative of the population in the United States in particular or the world in general, nor are 4.2% African and 1.1% Hispanic representative.
I’ve yet to see a useful study about chart junk in the last decade or so. Perhaps there’s something about the controversial nature of the debate and the provocative nature of claims that chart junk is useful (e.g., the possibility of knocking Tufte and Few down a notch or two) that shifts researchers from System 2 thinking (slow and rational) into System 1 (fast and emotional). Despite the flaws in this study, just like the others that have preceded it, dozens of future studies will cite it as credible and people will make outlandish claims based on it, which has already begun in the media.
Take care,

September 30th, 2013
I suspect that one of the reasons why people are drawn to pie charts is the fact that these charts are familiar from elementary school instruction in the meaning and mathematical use of fractions. Based on this instruction, a pie chart is the image that becomes strongly associated with the parts-of-a-whole concept (a.k.a., fractions). But, just because this is how fractions have been traditionally taught in schools, should we assume that pie charts are the best visual representation for learning fractions? Although the metaphor is easy to grasp (the slices add up to an entire pie), we know that visual perception does a poor job of comparing the sizes of slices, which is essential for learning to compare fractions. Learning that one-fifth is larger than one-sixth, which is counter-intuitive in the beginning, becomes further complicated when the individual slices of two pies—one divided into five slices and the other into six—look roughly the same. Might it make more sense to use two lines divided into sections instead, which are quite easy to compare when placed near one another?
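The advantage of lines over slices is easy to demonstrate even in plain text. The little sketch below is my own illustration (not from the article discussed next): it renders fractions as segmented lines of equal total width, so the comparison is carried by length along a common baseline, which visual perception judges far more accurately than the angles and areas of pie slices.

```python
def fraction_line(numerator, denominator, width=60):
    """Render a fraction as a fixed-width line: '#' marks the
    fraction's share, '-' the remainder of the whole."""
    filled = round(width * numerator / denominator)
    return "#" * filled + "-" * (width - filled)

# One-fifth vs. one-sixth: as aligned lines, the difference in
# length is immediately visible, unlike two similar pie slices.
for n, d in [(1, 5), (1, 6)]:
    print(f"{n}/{d}: {fraction_line(n, d)}")
```

With a width of 60 characters, 1/5 fills 12 positions and 1/6 fills 10, so the two lines differ visibly even though the corresponding pie slices look roughly the same.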

This not only makes sense based on our understanding of visual perception, but recent research has demonstrated that it in fact works better for learning. Take a moment to read the recent article about this by Sue Shellenbarger in The Wall Street Journal entitled “New Approaches to Teaching Fractions” (September 24, 2013).
Take care,

September 16th, 2013
Recently, I received an email from a fellow named Mark Ostroff who has written a guide to designing “accessible” content using the Oracle Business Intelligence Suite (OBIEE). In particular, the guide addresses issues regarding impaired vision, such as colorblindness and total blindness. Despite the fact that Mark began by saying that he and I “could be ‘twins separated at birth’ in our orientation about business intelligence,” by the second email in our conversation it became clear that he had a bone to pick. He accused me of shirking my responsibility by not teaching people to design information displays in ways that are accessible to the blind—dashboards in particular. Actually, his accusation was a bit harsher. He suggested that, by failing to teach people to design dashboards in ways that were accessible to the blind, I was encouraging my clients to break the law. Mark’s bold accusation prompted me to write about this issue.
I’ll begin by stating my fundamental position: a dashboard that is accessible to the blind is a contradiction in terms. “A dashboard is a visual display of the most important information needed to achieve one or more objectives, consolidated and arranged on a single screen so the information can be monitored at a glance” (Few, 2005). No form of data visualization, not just a dashboard jam-packed with graphics, can be made fully accessible to someone who is blind. I am not insensitive to the needs of people who are visually or otherwise impaired. I am merely pointing out what anyone who understands data visualization knows: no channel of perception other than vision can fully duplicate the contents of graphs. Similarly, what someone can communicate through the audio channel in the form of music cannot be fully expressed visually. If it could, why bother performing or recording music? Why not just distribute the written score? Vision is unique in its abilities to inform and enable thinking. Those who lack vision can develop their other senses to compensate to an amazing degree, but never in a way that fully duplicates the visual experience.
The information that is displayed in a dashboard can and should be presented to people who are blind in a different form when needed. Despite Mark’s bold challenge, current laws regarding accessibility require some organizations—mostly government—to provide the information contained in something like a dashboard in a way that is accessible to the blind, not necessarily to make the dashboard itself accessible. Unfortunately, an alternative form of presentation will not convey all of the information contained in a well-designed dashboard and it won’t communicate the information as efficiently, but if someone who is blind needs the information, it behooves us to provide a reasonable, even if imperfect, alternative. The alternative, however, will not be a dashboard. By definition, a dashboard is a visual display, because the visual channel provides the richest and most efficient means of presenting information for monitoring purposes, which no other channel can match—not even close. If airlines were required by law to provide flight-phobic customers with an earthbound form of transportation, that alternative would be called a train or a bus, not an airplane. In like manner, a means of monitoring that uses braille or a screen reader as its medium should not be called a dashboard. There’s enough confusion about the term already. Let’s not muddy it further.
When quantitative information is presented graphically, it offers the following advantages over written or spoken words and numbers:
- Patterns in the values are revealed
- Series of values (e.g., 12 months worth of sales revenues) are chunked together into visual objects (e.g., a line in a line graph), which makes it possible for us to see the entire series at once and compare it to other entire series of values, thus augmenting the capacity of working memory
- Much more information can be presented in the limited space that’s available on the page or screen
- The visual cortex processes the graphical information in parallel and more efficiently than the slower, sequential process that’s required for language processing
Data visualization is not only useful, it is finally being recognized as essential. It’s hard to imagine how any other channel of perception will ever be able to provide viable alternatives for these advantages of vision. It certainly isn’t possible to come close to doing this now.
I support the Americans with Disabilities Act (ADA). The ADA became law to prevent discrimination against people with disabilities. It does not, however, heal disabilities. It cannot give sight to the blind. It can require that organizations remove roadblocks to equal rights for those with disabilities and accommodate them in reasonable ways, but it should never try to equalize the playing field between those with sight and those without by forcing those with sight to wear blindfolds. Unfortunately, some efforts to expand accessibility venture into this territory, and I find that intolerable.
Mark seems to believe that all dashboards should be designed so that every bit of information is accessible to a screen reader to accommodate the needs of those without sight. To do this, a great deal of information would have to be added to dashboards and much of it would have to be expressed in inferior ways to make the contents of a dashboard accessible to a screen reader. Despite Mark’s good intention, this would result in dashboards unworthy of the name. The experience of those with sight would be unnecessarily compromised to a costly degree. I say unnecessarily, because the needs of the blind would be better served by a separate display that was designed specifically for a screen reader without compromising the design of the original dashboard. This approach, rather than the way that Mark advocates, would require less time, effort, and money. We should approach accessibility intelligently. What might work for a general purpose website might not work for a dashboard. One size definitely does not fit all.
It was hard for me to imagine what Mark had in mind as an accessible dashboard, so I downloaded his guide to take a look. I quickly learned that his idea of a dashboard is quite different from anything that I would qualify as such. Here’s an illustration from the guide:
What he calls a dashboard looks a lot like an online report with a couple of tables on it. A few graphs do appear in the guide, and Mark suggests that they should be made accessible to those who are colorblind in the following manner:
That’s right—according to the guide, crosshatching should be used in addition to colors. Crosshatching can create an annoying shimmering effect known as moiré vibration. This affects people who are colorblind as much as anyone. What this recommendation fails to take into account is the fact that people who are colorblind can see color (except for extremely rare cases of complete color blindness), they just can’t discriminate particular colors, primarily red and green. Avoiding combinations of colors that those who are colorblind cannot discriminate solves the problem without resorting to the scourge of crosshatching.
Despite a search, I failed to find anything in the accessibility guide that explained how information contained in graphs (i.e., images) and thus inaccessible to screen readers could be communicated to those without sight. Text descriptions can be attached to a graph that can be accessed by screen readers, but those descriptions would not contain any information about the values in the graph. Apparently, a dashboard that is accessible to the blind would need to eliminate graphics altogether. As I said before, the result would not be a dashboard. When accessibility to information in dashboards is needed by those who are blind, it currently works best to give them an alternative that displays text and tables of values formatted for easy accessibility by screen readers. A table cannot convey everything that a graph can, such as patterns of change and the means of comparing entire series of values, but no automated presentation of the data that isn’t visual could achieve that either. At best, someone could write a description of the patterns and summarize the story contained in the graph with words, but that would require human intervention, which cannot be automated—at least not yet.
We should be concerned about accessibility to information, not only for those with disabilities. Good design makes information accessible. It is a sad fact of life, however, that everything cannot be made equally accessible to everyone. People differ in ability and experience. Accessibility is achieved by understanding these differences and designing communications in a way that takes them into account. Accessibility is not achieved by slighting one audience in an attempt to meet the needs of another. So far, the business intelligence (BI) industry in general has not taken even the shared needs of humans into account, let alone the unique needs of particular groups. I’m not surprised that Oracle’s attempt to accommodate the needs of the visually impaired fails to exhibit thoughtful design. Oracle’s approach to accessibility so far is simpleminded, and certainly is not worthy of the name “business intelligence.”
Take care,

August 12th, 2013
This summer I’ve been spending most of my time working on a new book. The current working title is Signal. As the title suggests, this book will focus on analytical techniques for detecting signals in the midst of noisy data. And guess what? All data sets are noisy. In fact, at any given moment, most of the data that we collect are noise. This will always be true, because signals in data are the exception, not the rule.
Signal detection is actually getting harder with the advent of so-called Big Data. By its very nature, most Big Data will never be anything but noise. Collecting everything possible, based on the Big Data argument that the costs of doing so are negligible and that even data that you can’t imagine as useful today could become useful tomorrow, is a dangerous premise. The costs of collecting and storing everything extend far beyond the hardware that’s used to store it. People already struggle to use data effectively. This will become dramatically harder as the volume of data grows. Finding a needle in a haystack doesn’t get easier as you’re tossing more and more hay on the pile.
Most people who are responsible for data analysis in organizations have never been trained to do this work. An insidious assumption exists, promoted by software vendors, that knowing how to use a particular data analysis software product “auto-magically” imbues one with the skills of a data analyst. Even with good software—something that’s rare—this is far from true. Just as with any area of expertise, data analysis requires training and practice, practice, practice. Because few people whose work involves data analysis possess the required skills, much time is wasted and money lost as analysts pore over data without knowing what to look for. They end up chasing patterns that mean nothing and missing those that are gold. Essentially, data analysis is the process of signal detection.
Data that do not convey useful knowledge are noise. When data are displayed, noise can exist both as data that don’t provide useful knowledge and also as useless non-data elements of the display (e.g., irrelevant visual attributes, such as a third dimension of depth in bars, meaningless color variation, and effects of light and shadow). Both sources of noise must be filtered to find and focus on the signals.
When we rely on data for decision making, what qualifies as a signal and what is merely noise? In and of themselves, data are neither. Data are merely facts. When facts are useful, they serve as signals. When they aren’t useful, data clutter the environment with distracting noise.
For data to be useful, they must:
- Address something that matters
- Promote understanding
- Provide an opportunity for action to achieve or maintain a desired state
When any of these qualities are missing, data remain noise.
Signals are always signs of something in particular. In a sense, a signal is not a thing but a relationship. Data become useful knowledge of something that matters when they connect understanding to a question to form an answer. This connection (relationship) is the signal.
As I work on this book to define the nature of signals and to describe techniques for detecting them, I could benefit from your thoughts on the matter. In your experience, what data qualify as signals? How do you find them? What do you do to understand them? What do you do about them once found? What examples have you seen in your own organization or others of time wasted chasing noise? What can we do to reduce noise? Please share with me any thoughts that you have along these lines.
Take care,

June 26th, 2013
I recently read the most thorough, thoughtful, and cogent treatise on technology that I’ve ever encountered: To Save Everything, Click Here: The Folly of Technological Solutionism, by Evgeny Morozov.
My attraction to this book is not without bias. Morozov seems to view technology—its potential for both good and ill—much as I do, but the technologies that reside within his purview, the depths to which he’s studied them, and the disciplines on which he draws to understand them, exceed my own. His approach and grasp are those of a philosopher.
Morozov decries technological solutionism.
Alas, all too often, this never-ending quest to ameliorate—or what the Canadian anthropologist Tania Murray Li, writing in a very different context, has called “the will to improve”—is shortsighted and only perfunctorily interested in the activity for which improvement is sought. Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!—this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address.
I call the ideology that legitimizes and sanctions such aspirations “solutionism.” I borrow this unabashedly pejorative term from the world of architecture and urban planning, where it has come to refer to an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions—the kind of stuff that wows audiences at TED Conferences—to problems that are extremely complex, fluid, and contentious…Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching “for the answer before the questions have been fully asked.” How problems are composed matters every bit as much as how problems are resolved. (pp. 5 and 6)
This book exposes the threat of solutionism and proposes healthier ways to embrace and benefit from technologies.
The ultimate goal of this book…is to uncover the attitudes, dispositions, and urges that comprise the solutionist mind-set, to show how they manifest themselves in specific projects to ameliorate the human condition, and to hint at how and why some of these attitudes, dispositions, and urges can and should be resisted, circumvented, and unlearned. For only by unlearning solutionism—that is, by transcending the limits it imposes on our imaginations and by rebelling against its value system—will we understand why attaining technological perfection, without attending to the intricacies of the human condition and accounting for the complex world of practices and traditions, might not be worth the price. (p. xv)
If you’ve spent much time listening to or reading the words of Silicon Valley’s prominent spokespersons (Kevin Kelly of Wired, Mark Zuckerberg of Facebook, Eric Schmidt of Google, to name a few) you might have noticed that they tend to speak of technology as if it were spelled with a capital “T.” For them, Technology is a sentient being with purpose that, much like the God of evangelicals, has a wonderful plan for our lives. It is our job as believers to embrace Technology and let it lead us to the promised land, for it exceeds us in wisdom and power, and is unquestionably good. I’ve provided training and consulting services for many of the technology companies that preach this gospel. During these engagements, I do my best to moderate their techno-enthusiasm and point out that technologies are just tools that provide benefit only when they are well designed, capable of helping us solve real problems, and ethically used. We have choices when we approach technologies, and we should make them thoughtfully.
Morozov addresses information technologies of all types and critiques them incisively from the perspective of history and a breadth of disciplines. Even such a given as Moore’s Law, which technologists often cite as the basis of their position, is revealed as a failed hypothesis—hardly a law.
Morozov seems to share my concerns about Big Data. Regarding the popular new trend of capturing and storing everything he writes, “Where there is no reflection about what ought to be preserved, the records—no matter how comprehensive—might trigger fewer challenging questions about the relative significance of recorded events; the enormity of the archive might actually conceal that significance.” (p. 278) In opposition to those who fail to see the connection between the technologies of today with the past, he writes:
Contrary to his [David Weinberger of Harvard’s Berkman Center] claim that “knowledge is now property of the network,” knowledge has always been property of the network, as even a cursory look at the first universities of the twelfth century would reveal. Once again, our digital enthusiasts mistake impressive and—yes!—interesting shifts in magnitude and order with the arrival of a new era in which the old rules no longer apply. Or, as one perceptive critic of Weinberger’s oeuvre has noted, he confuses “a shift in network architecture with the onset of networked knowledge per se.” “The Internet” is not a cause of networked knowledge; it is its consequence—an insight lost on most Internet theorists. (p. 38)
Technologists (especially technology vendors) use the term “revolution” much too loosely. What qualifies as revolutionary? Morozov argues that, “In order to be valid, any declaration of yet another technological revolution must meet two criteria: first, it needs to be cognizant of what has happened and been said before, so that the trend it’s claiming as unique is in fact unique; second, it ought to master the contemporary landscape in its entirety—it can’t just cherry-pick facts to suit its thesis.” No recent so-called revolution in technology fails to meet these criteria more severely than Big Data.
I don’t agree entirely with everything that Morozov presents in this book, but at no point did I find his reasoning unsound or uninformed. He has opened my eyes to a few issues that fall outside of my primary spheres of interest, some of which have caused me to lose a little sleep, especially ways in which technological solutionism is influencing politics. While it is true that our political systems can be improved, the notion that we can “ditch politics altogether and hope that technology—especially ‘the Internet’—can rid us of problems that politics can no longer solve or, in a milder version, that we can replace politicians and politics with technocrats and administration” is frightening. (pp. 128 and 129) “Fixing politics without first getting a thorough understanding of what it is and what it is for is still a very dangerous undertaking…Political thinking, as well as political morality, needs to be cultivated; it doesn’t occur naturally—not even to geniuses in Silicon Valley.” (p. 139)
Technologies are important. They give us opportunities to extend our reach and improve our world, but they also give us opportunities to do the opposite. Morozov understands this. He is not a Luddite; he’s a responsible technologist. I recommend that you consider what he has to say.
Take care,
