The Incompatible Marriage of Data Visualization and VR
Every once in a while, someone claims that data visualization can be enhanced when viewed in virtual reality (e.g., by wearing a VR headset). Not once, however, has anyone demonstrated any real benefits. If you think about it for a moment, this isn’t surprising. How could viewing a chart in VR possibly work any better than viewing it on a flat screen? The chart would be the same and VR doesn’t alter visual perception; it merely gives us the ability to navigate through a virtual world. Whether viewing the real world (including a flat screen) or a virtual world, our eyes work the same. VR is useful for some applications, but apparently not for data visualization.
VR only gives us a different way of changing the perspective from which we view a chart if the chart is three dimensional. If the chart is two dimensional, whether viewing the chart on a flat screen in the real world or in the virtual world, we would view it straight on. Viewing it from any other perspective (e.g., from the side or behind) would always be less useful than straight on. If the chart is three dimensional, however, in addition to moving the chart around to view it from different perspectives as we would when using a flat screen (e.g., by rotating it), in VR we could also virtually move ourselves around the chart to change perspectives, such as by virtually walking around to view it from behind. Does this offer an advantage? It does not. What we see once we shift perspectives is the same, no matter how we get there.
VR does offer one other navigational possibility. While wearing a VR headset, we could virtually position ourselves within the chart, among the data. Imagine yourself in the midst of a 3-D scatter plot, with data points all around you. Would this offer a better view? Quite the opposite, actually. It would cause us to become metaphorically lost in the forest, among the trees. How much of the data could we see if we were located in the midst of it? Very little at any one time. To see it all, we would need to turn back and forth to see only bits of it at a time. Much of a chart’s power is derived from seeing all of the data at once. What might seem cool about VR navigation on the surface would be dysfunctional in actual practice when viewing data visualizations.
Before moving on, I should mention the general uselessness in all but rare cases of 3-D charts that encode a variable along the Z axis. The Z axis simulates depth perception, but unlike 2-D positions along the X axis (horizontal positions) and the Y axis (vertical positions), which human visual perception handles exceptionally well, our perception of depth is relatively poor. It is for this reason that vision scientists sometimes refer to human visual perception as 2.5 dimensional rather than 3 dimensional. 3-D charts suffer from many problems, which is why the best data visualization products avoid them altogether. Viewing 3-D charts in VR solves none of these problems.
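To see why depth is such a weak encoding channel, consider how a 3-D chart ends up on a 2-D screen (or retina): under perspective projection, points at very different depths can land on exactly the same 2-D location, so a viewer must reconstruct Z from indirect cues. A minimal sketch in Python illustrates this; the projection function and the point values are illustrative, not taken from any product:

```python
def project(x, y, z, d=1.0):
    """Perspective-project a 3-D point onto a screen plane at distance d.

    The screen position depends only on the ratios x/z and y/z, so
    depth (z) is not directly recoverable from where a point lands.
    """
    return (d * x / z, d * y / z)

# Two points with very different depths...
near = project(1.0, 1.0, 2.0)   # z = 2
far = project(2.0, 2.0, 4.0)    # z = 4

# ...project to exactly the same screen position.
print(near, far)  # (0.5, 0.5) (0.5, 0.5)
```

The ambiguity is inherent in the projection itself, which is why values encoded along the Z axis must be inferred from secondary cues (occlusion, shading, motion) that we judge far less precisely than 2-D position.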
I was prompted to write about this in response to a recent press release about a new product named Immersion Analytics by the company Virtual Cove. The company claims to provide several patent-pending features that enhance data visualization through VR. When I read the press release, being the suspicious guy that I am, I suspected yet another false claim about the benefits of VR, but I was more than willing to take a look. What I found, as expected, was pure nonsense. I’ve examined every example of an Immersion Analytics data visualization that I could find and observed nothing that would work any better when viewed in VR rather than on a flat screen. During a promotional presentation that’s available on YouTube, the company’s founder, Bob Levy, who “invented” the product, listed three visual attributes, as examples, that we can supposedly view more effectively in VR: Z position, glow, and translucency. I’ve already explained how Z position is just as useless in VR as it is on a flat screen, but what about the other two attributes? By glow, Levy is referring to a halo effect around an object (e.g., around a bubble in a 3-D bubble plot) that varies in brightness. You can see this effect in the example below.
Notice how poorly this effect works as a means of encoding quantitative values. Can you determine any of the values? I certainly can’t. Nor can we compare halo intensities to a useful degree. How could this possibly work any better in VR? VR doesn’t enhance visual perception. Our eyes work the same whether we view a chart on a flat screen or in VR. The remaining attribute—translucency—is no different. What Levy means by translucency (a.k.a., transparency) is the ability to see through an object, like looking through glass. Varying the degree to which something is translucent is also illustrated in the example above: the bubbles are translucent to varying degrees. Can we decode the values represented by their translucency? We cannot. Can we compare the varying degrees to which bubbles are translucent? Not well enough for data sensemaking. During the presentation, Levy claimed that if we could view this chart while wearing a VR headset rather than on a flat screen, translucency would work much better. That is a false claim. Our perception of translucency would not be changed by VR, and it certainly wouldn’t be enhanced.
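The weakness of translucency as an encoding channel can even be shown arithmetically. Rendered pixels are produced by standard "over" alpha compositing, so the color we actually see for a translucent bubble depends on whatever lies behind it; the same data value renders differently against different backgrounds. A minimal sketch in Python (the colors and alpha value are illustrative):

```python
def composite(fg, bg, alpha):
    """Standard 'over' alpha compositing: blend a translucent
    foreground color onto a background, per RGB channel."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

blue = (0, 0, 255)

# The same bubble (same data value, alpha = 0.5) over two backgrounds:
on_white = composite(blue, (255, 255, 255), 0.5)  # (127.5, 127.5, 255.0)
on_grey = composite(blue, (128, 128, 128), 0.5)   # (64.0, 64.0, 191.5)

# The rendered pixels differ, so a viewer cannot recover alpha
# (the encoded value) from a bubble's appearance alone.
print(on_white, on_grey)
```

Since the compositing math is identical whether the pixels are drawn on a flat screen or inside a headset, VR does nothing to make translucency a more decodable encoding.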
Based on the examples that I reviewed, my suspicions about the product’s claims seemed justified. When I contacted Andrew Shepherd, Virtual Cove’s VP of Strategic Growth who functions as their media contact, to ask several questions about the product, I honestly admitted my skepticism about their claims. In response, he wrote, “I know it could be a long shot, but it would be a thrill to convert you from a skeptic into a true believer.” Definitely a long shot, but I nevertheless offered to examine the product in VR if they would loan me the necessary equipment. Perhaps you won’t be surprised to hear that they don’t have any VR headsets available to loan to skeptics. Faced with no possible way to evaluate their VR claims directly, I asked Shepherd a simple question: “What can be seen in VR that cannot be seen just as well on a flat screen of equal resolution?” I’m still waiting for an answer. I’ve found that Shepherd quickly responds to questions that he likes, but persistently ignores those that are inconvenient. I am still quite willing to be surprised by VR-enhanced capabilities that contradict everything I know about visual perception, but I’m not holding my breath.
If a vendor tries to sell you VR data visualization software, I suggest that you either ignore them altogether or do what I’ve done—ask them to justify their claims with clear explanations and actual evidence—then be prepared to wait a very long time.
14 Comments on “The Incompatible Marriage of Data Visualization and VR”
I’m as skeptical as you are—but I’m open to keep trying to see what can be gained with VR for data visualization.
Where I *do* see applications of VR is in scientific visualization and pictorial representations in general. I’ve tried several hi-res simulations of, say, human organs and virtual spaces, and they are impressive as educational tools; my guess is that there’s a lot of potential in this type of application.
Alberto,
As I mentioned, VR certainly has its applications. The examples that you described are excellent uses of VR, but they’re not data visualizations. The relevant question is, have you ever seen a data visualization that was enhanced in any way by VR? If you or anyone else reading this blog has, please share it with us.
While I agree, your argument is based on using current charts in VR. It is at least conceivable, however unlikely, that someone could create a completely new chart that is only possible in VR. On a brief tangent to Augmented Reality, it’s easy to imagine an environment where a surgeon could see a live view of the heart with valve and vessel data overlaying each in context. Returning to VR, the same could evolve into an incredible training tool. The key in both of those is data in context, which I could imagine being incredibly valuable. To your point, displaying standard charts in VR is pointless, if not detrimental. However, I hold open the possibility of new charts and the display of existing charts in context.
Tom,
It is possible that someone could come up with an approach to data visualization that takes advantage of VR. No one has done this yet, however. It is certainly not the approach that Virtual Cove has taken. If someone invents a new approach to data visualization that actually derives benefit from VR, I’ll embrace and applaud it.
We have been doing this research at work as well, in both the VR and AR space. Results have been very disappointing. Even in the cyber security space, looking at large graphs has shown little to no tangible improvements.
Thanks Graig. I’m curious. You mentioned that your work has produced “little to no tangible improvements.” Have you uncovered actual evidence of any improvements whatsoever related to VR when applied to data visualization? If so, I’m very interested in hearing about it.
Stephen,
I think the biggest tangible benefit we have found is the ability to collaborate from remote locations as though we were all co-located.
I am not saying you are not able to do that in 2D information visualizations, but the ability to do so in an “unconstrained” virtual environment allows different and new interaction paradigms. Specifically, multi-user concurrent interactions.
Graig,
That’s interesting. So you’re saying that VR doesn’t enhance what you can see in a data visualization or how you can interact with it, but it does potentially enhance the interaction among people who are viewing and interacting with a data visualization together. That makes sense, potentially, but only if the virtual world does a great job of simulating the real world.
When I work with a small group of people to view and interact with data visualizations on a screen together, being co-located in that room offers several real advantages. For example, I can use my finger to point to something on the screen and everyone knows immediately what I’m referring to. When I talk with people, I can look them in the eyes and read their expressions. If Mary wants to show us something by interacting with the visualization, such as by filtering the data, I can pass her the mouse and she can do it directly. If John wants to take over for the moment, he can let us know that in a natural way.
I’m trying to imagine how this fluid and natural interaction would translate to a virtual world. To help me understand, perhaps you could answer a few questions:
1) How are people represented in the virtual world? As avatars?
2) How do you initiate and control your movements in the virtual world?
3) Can you convincingly make eye contact with others in the virtual world?
4) How are physical boundaries simulated in the virtual world to prevent people from doing things that would be disorienting, such as walking through one another?
These are my questions for now.
Those are the things we are looking at now: how to make the interactions more seamless and feel more natural.
1) Just as cursors (controllers, really) currently. We aren’t trying to “live inside the visualization,” just to interact with it.
2) We have most recently been using the HTC Vive and its hand controllers. They are surprisingly natural feeling. Once you map out the virtual world, the controls just become an extension of your hands.
3) Aren’t really looking at this because we aren’t trying to represent us as a person entity in the space. I can see that being a barrier, for the reasons you mentioned above.
4) Again, we haven’t worried about the walking-through-people issue because that isn’t how our interactions work. In terms of physical barriers, boundaries from the real world are mapped into the virtual space through hashed indicators. Walking into a real person isn’t too big of an issue, as you know where they are based on their hand controls being represented in virtual space.
I am not claiming anything we have done or discovered is a huge win for viz in VR or AR, but I think with time the possibilities are there.
We have additionally looked at training in VR space which seems to be producing some good results.
You present a number of convincing points on the inherent limitations of VR and data visualization. Consider me a fellow skeptic, but I think there is much greater potential in AR/MR applications.
“What can be seen in VR that cannot be seen just as well on a flat screen of equal resolution?” Simple. How many screens do you need to visualize 10-20+ charts in an easy way? How many thousands or tens of thousands of dollars do they cost? You can buy a VR headset and be in a complete virtual environment, surrounded by dozens of charts – and can easily switch between 2D and 3D representations when it makes sense. Plus, you can move them around easily, group them and filter them, and interact with them in a natural way.
In addition, 3D does benefit from VR. You sound like you have never used VR. You can move around the 3D object, so the occlusion problem is much less significant. You can “walk” through the data. You can “become the data”, by changing perspective and looking around from a particular data point.
Plus, there is also value in collaboration – where multiple people can see and interact with, the same data.
In addition, one can also include simple prototyping tools that make it easy to create new charts during virtual meetings, saved directly in a digital format, thus avoiding the paper-sketch-to-digital conversion that would otherwise occur.
Hello Dan,
I’ll address each of your points in turn.
Point #1: You would need many computer screens to display what could be seen with a VR headset.
Actually, you would need only one screen to see what you could see with a VR headset. At any one moment we can only see with clarity what we’re looking at directly. This is the same whether we’re viewing content on a flat screen or with a VR headset. The only difference is how we shift our view. With a VR headset, we shift our view by moving our head. With a single flat screen, we shift our view by changing what appears on the screen (e.g., by scrolling or using one of many means to navigate from one view to another). If we were playing a virtual reality video game, there might be an advantage to using a VR headset, for it would better simulate how we navigate the physical world. This advantage does not extend to data visualization, which presents abstract data that doesn’t exist in the physical world.
Point #2: VR allows you to move around 3D objects to solve the problem of occlusion.
I’m not particularly inclined to address the comparative benefits of viewing pseudo-3-D displays using flat screens versus VR headsets because 3-D displays offer no advantages for data visualization no matter how you view them. Virtually moving around in the midst of a chart (for example, imagining yourself in the midst of a 3-D scatter plot) is not useful no matter what system we use. We would see less while in the midst of the chart than we would while viewing it from the outside. We certainly don’t need a VR headset to address problems of occlusion because we can shift our point of view just as easily while viewing a 3-D visualization on a flat screen.
Point #3: A VR headset allows multiple people to view and interact with data at the same time.
Multiple people can view the same data using flat screens just as well. What can be viewed while each person is wearing a VR headset can also be viewed while each person is looking at his or her own flat screen. In both cases, the software interface must allow multiple people to view the same content simultaneously.
Point #4: A VR headset allows multiple people to create new charts together.
Multiple people can view the same screen’s worth of content and co-create the same chart while working on individual flat screens just as well as they can while wearing individual VR headsets. In both cases, the software interface must allow multiple people to manipulate the content.
If you don’t understand or agree with any of my responses, I invite you to continue the discussion.
I suppose VR allows a workaround for poorly presented data. Zooming in and drilling down/in are already features of various BI tools; VR just adds sparkly lights.
For instance, I could be looking at solar panels on a roof. With VR/AR, an overlay of the efficiency of each could highlight which ones are underperforming. This *might* just present a pattern that may not be as obvious as looking at a bunch of lines, especially if a real-world object is casting unexpected shadows.
But then again, a heat map would be just as good, rather than creating a strawman argument for VR toys.
Steve,
VR (virtual reality) and AR (augmented reality) should not be confused. They are quite different. The example that you gave involving solar panels is an example of AR, not VR.