VisWeek 2011 – Award-Worthy Visualization Research
On Tuesday in this blog I expressed my frustration with VisWeek’s information visualization research awards process. I don’t want to leave you with the impression, however, that the state of information visualization research is bleak. Each year at VisWeek I find a few gems produced by thoughtful, well-trained information visualization researchers. They identified potentially worthy pursuits and did well-designed research that produced useful results. While puzzling over the criteria that the judges must have used when selecting this year’s best paper, I spent a few minutes considering the criteria that I would use were I a judge, and came up with the following list, with points totaling 100:
Effectiveness (It does what it’s supposed to do and does it well.) — 30 points
Usefulness (What it does addresses real needs in the world.) — 30 points
Breadth (Many people will find it useful.) — 10 points
Applicability (It applies to a wide range of uses.) — 10 points
Innovativeness (It does what it does in a new way.) — 10 points
Technicality (It exhibits technical excellence.) — 10 points
Given more thought, I’m sure I would revise this to some degree, but this gives you an idea of the qualities that I believe should be rewarded and the importance of each.
One research paper, among a few, that thrilled me with its elegance and exceptional usefulness was “Evaluation of Artery Visualizations for Heart Disease Diagnosis,” presented yesterday by Michelle Borkin of Harvard University’s School of Engineering and Applied Sciences.
I’ll allow the abstract that opens the paper to summarize the work:
Heart disease is the number one killer in the United States, and finding indicators of the disease at an early stage is critical for treatment and prevention. In this paper we evaluate visualization techniques that enable the diagnosis of coronary artery disease. A key physical quantity of medical interest is endothelial shear stress (ESS). Low ESS has been associated with sites of lesion formation and rapid progression of disease in the coronary arteries. Having effective visualizations of a patient’s ESS data is vital for the quick and thorough non-invasive evaluation by a cardiologist. We present a task taxonomy for hemodynamics based on a formative user study with domain experts. Based on the results of this study we developed HemoVis, an interactive visualization application for heart disease diagnosis that uses a novel 2D tree diagram representation of coronary artery trees. We present the results of a formal quantitative user study with domain experts that evaluate the effect of 2D versus 3D artery representations and of color maps on identifying regions of low ESS. We show statistically significant results demonstrating that our 2D visualizations are more accurate and efficient than 3D representations, and that a perceptually appropriate color map leads to fewer diagnostic mistakes than a rainbow color map.
What you see on the left side of the image above is the conventional rainbow-colored 3D visualization that most cardiologists rely on today. Although this visualization accurately represents the physical structure of coronary arteries, it is not necessarily the best view for identifying areas of low ESS. One problem is occlusion, which forces cardiologists to rotate the view to see the arteries from all perspectives, a process that is time-consuming and prone to error. The use of rainbow colors as a heat map to represent a range of quantitative values from low to high ESS is another significant problem that leads to inefficiency and error.
Michelle Borkin eloquently told the story of how she and her colleagues, an interdisciplinary team that included medical experts, worked through the process of designing, testing, and improving HemoVis. Part of the story that fascinated me was the way that they dealt with the expectations and biases of cardiologists, formed by their training and experience with existing systems. People often develop strong preferences for visualizations that perform poorly, merely because they are familiar or superficially attractive. Opening them to other possibilities can be challenging. Because the team worked so well with the cardiologists and because they did fine work with benefits that could be demonstrated, they managed to loosen the cardiologists’ hold on the familiar and open them to a solution that worked much better. When adoption of a new system results in lives being saved, this is a great success.
I don’t want to describe HemoVis in detail, because I want you to read the paper and fully appreciate the beauty of this work. I do want to mention a few features, however, that illustrate the design’s excellence. As you can see in the HemoVis screen below, the coronary arteries are arranged as a simple tree structure rather than according to their actual physical layout. There are three major branches, with sub-branches off of them. To make the interior walls of the arteries entirely visible at a glance, they have been opened up and flattened, much as cardiologists sometimes “butterfly” an artery. The width of the artery representation varies in relation to the circumference of the artery. A diverging color scale, with gray for the low range of risk and red for the high range, worked dramatically better than the rainbow scale of conventional images. Just as maps of London’s metro system are easier for commuters to use when the lines are arranged schematically rather than geographically, this rearrangement of the coronary arteries and the simplified color scale perfectly support the task of spotting locations of low ESS. Form supporting function this effectively is a thing of beauty. Tests involving cardiologists demonstrated that HemoVis required little training and resulted in a significant increase in the number of risk areas that were identified, the elimination of false positives, and a dramatic reduction in the amount of time needed to complete the task. In other words, HemoVis has the potential to save lives.
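If you want to see for yourself how much the choice of color scale matters, here is a minimal sketch of my own (not from the paper; HemoVis is not reproduced here, the data is entirely synthetic, and the specific colors are made up for illustration). It renders the same simulated ESS field twice: once with a rainbow colormap and once with a gray-to-red scale that reserves saturated red for the low-ESS, high-risk end.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Synthetic stand-in for ESS sampled over a flattened artery wall
# (rows = position along the artery, columns = position around the circumference).
rng = np.random.default_rng(0)
length, circumference = 200, 40
ess = 2.0 + 0.5 * rng.standard_normal((length, circumference))
ess[80:110, 10:25] -= 1.5  # a patch of low ESS, the region of clinical interest

# A gray-to-red ramp: saturated red for low ESS (high risk),
# fading to neutral gray for unremarkable, higher ESS values.
gray_red = LinearSegmentedColormap.from_list(
    "gray_red", ["#b30000", "#d7301f", "#cccccc"])

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, cmap, title in [(axes[0], "jet", "Rainbow (jet)"),
                        (axes[1], gray_red, "Gray-to-red risk scale")]:
    im = ax.imshow(ess, aspect="auto", cmap=cmap, vmin=0, vmax=4)
    ax.set_title(title)
    ax.set_xlabel("Position around circumference")
    fig.colorbar(im, ax=ax, label="ESS (Pa)")
axes[0].set_ylabel("Position along artery")
plt.tight_layout()
plt.show()
```

With the rainbow scale, the eye is drawn to arbitrary hue boundaries scattered across the wall; with the gray-to-red scale, only the low-ESS patch stands out, which is precisely the judgment a cardiologist needs to make.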
Follow this link to download the paper to see an exemplar of fine information visualization research. Several other information visualization papers that have been presented at VisWeek this year exhibit fine work as well (including all of the infovis papers submitted by Stanford), but this one in particular touched my heart. (Yes, the pun was intended.)
Take care,
6 Comments on “VisWeek 2011 – Award-Worthy Visualization Research”
Thanks so much for that. A lot of fantastic things are happening in infovis research, yet from your previous comments your readers could get an overall negative impression of both research and practitioners. I appreciate your views and know that you stand your ground firmly on what you believe is right, yet for new concepts of research to permeate the larger “visual community,” I think it is important that this research appear in a somewhat positive light.
So I am very grateful that you take meriting papers or accomplishments such as this one and explain why they are good.
I would also be curious to have your views on Jessica Hullman’s presentations, especially on “benefitting infovis with visual difficulties” – I mean, even if you don’t like it.
all the best,
Jerome
Jerome,
I actually think it’s appropriate to give the impression that most infovis research is mediocre at best. This is sad, but true. The wonderful work that’s being done by some, which I appreciate and promote, is the exception rather than the norm. I critique poor research so we can learn from our mistakes and strive to raise the overall quality of infovis research.
Regarding the paper “Benefiting Infovis with Visual Difficulties,” which was one of only two infovis papers that received an honorable mention at VisWeek this year, I will publish a thorough review of this work soon, either in my blog or as my next newsletter article. This paper made provocative claims, none of which are justified. It is actually worse in quality than the paper that was presented last year about the benefits of chartjunk, which I reviewed in my newsletter earlier this year. Stay tuned.
Stephen,
I am an avid reader of your blog, have purchased all your books and I am trying very hard to see you “live” at one of your conferences but logistics get in the way of actually meeting you face to face.
Despite all the research, the care you take in promoting data visualization as a discipline in its own right, and most of the content in your books, blogs and articles, what entices me the most in reading your material is the honest, no nonsense way you tackle each and every opportunity to challenge, critique (not critize), and improve on existing designs.
And that is exactly why I am writing again here in this blog, specifically on the infovis blog, because I have received an email from you on your critique (which at times bordered on bashing) of a research paper created by Bateman, et al.
Stephen, as much time as you spend advocating proper data visualization, whose main goal is the proper communication of facts into valid information, please also spend some time communicating properly with other human beings. Unlike numbers and charts, human beings are a bit more delicate to deal with and require a much finer approach, involving culture, language, writing style, body posture, and hundreds of other small factors, all of which can affect how your message is received.
Now, if I were the academic body responsible for letting that paper be published, or one of the authors of such a paper, I would make it my career to ensure that everything you wrote, said, or published was scrutinized to the maximum – in effect, creating something of an academic war, which is truly a waste of time and would hinder the advancement of data visualization even more.
But more important to me is that after reading all your books and most of your blogs and newsletters, I have yet to see you create a “real life” dashboard or report rather than critique other people’s work. The examples in your books are simplistic at best, based on proper, clean data, ready to be visualized. They are based on a hypothetical company with hypothetical business information requirements.
There is nothing close to arriving at a client’s office and being swamped with business and user requirements about how each user group wants to see its data, what data should be visible to them and what data needs to be hidden, and how to handle requests such as “I would like to see a full dashboard for my sales team on my mobile phone within a real estate of 400 pixels by 300 pixels.”
Now that would be a great book!
Telmo
Telmo,
It might not be obvious, unless you’re involved in the infovis research community, how desperately critique is needed. This field of research is still young and emerging, stumbling to find its way in the world. I try to praise good work in the field and explain why it is good, as I’ve done in this particular blog post, and I try to expose the flaws of poor work and explain why it is bad. I don’t “bash.” I take great care when critiquing work to do so in a fair, accurate, and substantive manner that promotes learning and improvement. When I critique the work of scholars (graduate students and professors) or professionals in the field, I am direct, and I feel no need to preface every statement with a polite phrase. I try to tell the truth as I see it as clearly as possible. This approach is built deeply into my personality and character, which people sometimes find annoying.
Regarding the graphs, dashboards, etc., that appear in my books and other written work, I design them to teach particular lessons. I usually keep them simple and focused on the point that I’m trying to illustrate. It is not true, however, as you should know as an “avid reader” of my work, that it lacks complex, real-world examples. When they’re needed, I provide them. What you don’t see are the many reports, dashboards, presentations, etc., that I have helped clients create, most of which are for internal use only. The dashboards that appear in the last chapter of Information Dashboard Design are based on real-world data and were much more complex than any dashboards that I had seen when they were created. Even today, several years later, they are still much more information rich than any dashboards that I’ve seen since, except for those that were created based on the principles that I taught in the book. It is true that this book would be improved by more well-designed, real-world examples of dashboards, which I will add when I write a second edition next year. Stay tuned.
Hi Stephen,
Understood. However, I still believe that to properly foster, shape, and expand the field, care must be taken not to destroy ideas, concepts, and theories too early in their life cycle. A strong critique from someone with your reputation in the field could undermine future careers, ideas, and concepts forever. Some of those ideas and concepts could have value in the future if followed through to their completion (even if initially they are based on a false foundation).
As for the real-world examples, I believe I was venting more of a frustration with the lack of material exposing real-life implementations of dashboards and reports in common industries. Looking forward to the second edition and future books and articles.
Hey, Steve–
Thank you for highlighting this important work. One of the most compelling aspects of it is the reduction of false positives. Clearly this is vitally important for the delivery of high-quality patient care–i.e., not subjecting people to interventions and procedures they don’t need and which may cause more harm than benefit–as well as for the resultant reduction in the cost of care. These are two factors that will go a long way toward helping this research find its way into accepted practice in the medical community–problems that most research fails to successfully overcome, thereby relegating it to the fifth dimension of total obscurity. Thanks for your review and for bringing this work to our attention. Kathy