
AR and VR in data visualisation – can it ever be useful to our puny human minds?

'The biggest challenge is to not misrepresent the data'

Great, a floating orange blob. So what?

Well, things get technical quite quickly, which reflects just how young this field still is.

Nobody quite knows what type of input will win out. Hololens can be controlled by hand gestures, a simple clicker or an Xbox controller, while the Rift and Vive VR systems have their own hand controllers.

Senft has been experimenting with Leap Motion's hand-gesture tracking technology, which can be tacked on to headsets, and, in the absence of a keyboard, with voice commands too. "There are no standards. The hardware is evolving very quickly, so it's a moving target," Senft says.

In terms of development tools, developers for the Hololens need to use Microsoft's Visual Studio; the company recommends the game engine Unity, which Peters says handled his unusual 3D graphs "beautifully". VR developers are also experimenting with Unity, although it has its critics. Burden points out that, as a game engine, it is optimised to display a relatively small number of complex shapes, whereas data scientists want to see far more, much simpler points. It was also a struggle to get it to display things like data tables, he says, so in the end he switched to Windows.

These may be merely the teething problems of an emerging hardware platform. But others have identified a more fundamental problem: by dropping us into a space where some points are closer than others, VR might even distort our understanding of the data.

"The biggest challenge is to not misrepresent the data, because of the angle you're viewing data from," explains Becca Wilson, a senior research fellow at Newcastle University who was part of a recent project to visualise biomedical data in VR for the Wellcome Trust.

From a particular vantage point inside a 3D scatter graph, for example, "you wouldn't see a pattern, and from another, you would," she says. It's an issue that doesn't come up when looking at data in 2D.

"Things can be hidden... an outlier can be ducking behind [another point]," agrees Peters. This projection problem – which is also an issue with a 3D graph on a screen, as well as VR – means that users must be able to walk around the data, he says, so they don't experience just one viewpoint.

There is also a question mark over how much information our brains can absorb from VR visualisations. For all the fantasies of multicoloured, humming, spinning shapes that allow us to see in ten dimensions at once, "there is a limit on how much a human can interpret," says Wilson. "That's actually the problem: we can't interpret multidimensional data when it goes beyond a certain number of dimensions."

How many is too many? Datascape can visualise up to 13 – colour, shape, the X, Y and Z axes and so on – "but realistically, the chance of making sense of that is next to zero," says Burden. Beyond about six variables, it simply becomes too complex to take in, he says. Instead, variables are often doubled up to improve comprehension, so that size and colour might represent the same thing, he explains.
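As a rough illustration of that doubling up, the following sketch (again Python and matplotlib, with invented data rather than anything from Datascape) maps a single variable to both marker colour and marker size, so the two channels reinforce one another instead of carrying separate dimensions.

# "Doubling up" encodings: one variable drives both colour and size,
# so the two visual channels repeat the same information.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 300))      # three positional dimensions
value = rng.uniform(size=300)            # a fourth variable to encode

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
points = ax.scatter(
    x, y, z,
    c=value,                             # colour encodes `value`...
    s=20 + 180 * value,                  # ...and so does marker size
    cmap="viridis",
    alpha=0.8,
)
fig.colorbar(points, label="value (encoded twice: colour and size)")
plt.show()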

For now, no one expects a single big player to steam in and create the default VR or AR data visualisation platform. Neither are the big players giving much away, beyond tightly controlled and gadget press-friendly news announcements. Microsoft declined a request for an interview about its plans for Hololens. Both Oculus and HTC were equally tight-lipped.

This is an embryonic field, and only when these companies' new platforms are in more hands will it become clear where VR and AR genuinely make data easier to comprehend, where the market follows and critical mass builds, and where the technology is just being used for its own sake. ®
