AMD: Star Trek holodecks within reach
Exec explains heterogeneous computing to Geordi La Forge
ISSCC Twenty-five years ago, Star Trek: The Next Generation introduced the holodeck, a chamber aboard the USS Enterprise NCC-1701-D that could transform itself into any environment. In a decade or two, however, that sci-fi fantasy could be real.
So said the general manager of AMD's global business units, Lisa Su, speaking at the International Solid State Circuits Conference (ISSCC) in San Francisco on Monday morning.
To emphasize the fact that science fiction devices such as the holodeck can, indeed, move from fantasy to reality, she invited actor LeVar Burton, who played lieutenant commander Geordi La Forge in TNG, to join her onstage.
"We've gone from the flip-phone as a result of the [Star Trek] Communicator," Burton said. "We've got the Bluetooth ear device as a result of Lieutenant Uhura's communication device. My goodness, on the Enterprise we carried pads, and now we have got tablet computers."
So why not a holodeck today? Well, Su said, one obvious roadblock is the high computational requirement for the 360-degree imaging, directional audio, natural user interface, contextual computing, and so on that would be required to create a room that could change itself into, well, into anything.
According to Su, the technology that will enable that processing load is one that AMD has been promoting since at least 2009: heterogeneous system architecture (HSA).
As a member of the HSA Foundation along with such worthies as ARM, Imagination, Texas Instruments, LG, Qualcomm, and others, AMD is working towards the day when CPUs, GPUs, and specialized accelerators can all operate on data held in a shared memory architecture, and be transparently programmed – in such popular languages as C, C++, and Java – to take on the tasks each is best suited to.

Geordi La Forge aka LeVar Burton aka Fram from Planet Mopar
"What we're really trying to do is have heterogeneous systems really become the foundation of our computing going forward," Su said. "And that's the idea that you make every processor and every accelerator a peer processor. So in other words, you want to be able to use high-level programming languages like C, C++, and Java; get lots of programmers able to use all of the hardware that we can put on a chip; and ensure that we're able to transition between all of these computing elements in a very seamless way."
To accomplish that seamlessness, a shared memory architecture will be critical – and it's certainly no coincidence that such an architecture is planned for the next iteration of AMD's APUs, code-named Kaveri, slated for release in the second half of this year.
Su described her company's current APUs – code-named Llano and Trinity – as the first generation of integration: the CPU and GPU are on the same chip and they have the same memory controller, but they must move data back and forth to each other over the memory bus. "They're really bottlenecked by that bus," she said.
Kaveri will be the next step towards HSA integration. "The next generation of heterogeneous systems really looks at a much more 'system' view of the world," Su said. "This is where you can put CPUs, other HSA computing units like audio acceleration as well as graphics units together on-chip where they have unified, coherent memory."
Doing so will enable all of the compute and acceleration units to access all the data at a high throughput rate with minimal latency, and in much smaller chunks. The generation after that, Su said, will add "more sophistication in the graphics unit, including compute context switching and some graphics preemption."
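To make the difference concrete, here's a minimal sketch – not AMD's HSA interface, just an illustration – using CUDA's managed memory as a rough present-day analogue of the unified, coherent memory Su describes, contrasted with the explicit staging copies that a partitioned-memory design forces on the programmer:

```cuda
// Illustrative only: CUDA managed memory stands in here for the unified, coherent
// memory Su describes for Kaveri-class APUs. This is NOT AMD's HSA programming
// model; it simply shows how removing explicit copies changes the shape of the code.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // trivial GPU work on the buffer
}

int main() {
    const int n = 1 << 20;

    // --- Today's "copy over the bus" pattern: separate host and device buffers ---
    float *host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);  // bus traffic
    scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);  // more bus traffic
    printf("copied path:  host[0] = %f\n", host[0]);
    cudaFree(dev);
    delete[] host;

    // --- Unified-memory pattern: one allocation visible to both CPU and GPU ---
    float *shared = nullptr;
    cudaMallocManaged(&shared, n * sizeof(float));
    for (int i = 0; i < n; ++i) shared[i] = 1.0f;      // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(shared, n, 2.0f);  // GPU works on the same pointer
    cudaDeviceSynchronize();
    printf("managed path: shared[0] = %f\n", shared[0]); // CPU reads directly
    cudaFree(shared);

    return 0;
}
```

The second half is the shape HSA is aiming for: every processor touches the same coherent allocation, and the bus-bound copy round-trips simply disappear from the code.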
That's the hardware side. "As a hardware person," she said, "I can say that putting the hardware together is not so hard. Getting the entire software ecosystem to come together is a harder challenge."
To do that, one goal is to develop the HSA intermediate language – HSAIL – to allow programmers familiar with such high-level programming languages as Java, C, and C++ to take advantage of HSA's capabilities.
"This is a big undertaking," Su said. "It requires a lot of cross-collaboration between hardware, software ecosystem, and all the industry standards that have to come around it. But if you think about what we could get with that – the idea that you could write your software once and run it anywhere."
And one of those anywheres, Su said, could be a holodeck. From her point of view, HSA could greatly assist the algorithms a holodeck would rely on, such as RANSAC for computational photography, voice activity detection (VAD) for directional audio, Markov models for improved speech recognition, and Haar face detection – that last one, Su said, being a particularly good example of an algorithm that benefits from using the CPU when it's best suited to a given stage of the work, and the GPU when its parallel architecture is the better fit.
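Su didn't show code, but the dispatch idea she alludes to looks roughly like the toy sketch below: a simple brightness threshold stands in for one stage of a Haar-style detector, small frames stay on the CPU (where launch and copy overhead would dominate), and large frames go to the GPU. The size cutoff and helper names are arbitrary illustrative choices, not anything AMD has published.

```cuda
// Toy sketch of CPU-or-GPU dispatch. A brightness threshold stands in for one
// stage of a Haar-style face detector; the cutoff below is purely illustrative.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void threshold_gpu(unsigned char *px, int n, unsigned char cut) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] = px[i] > cut ? 255 : 0;
}

static void threshold_cpu(unsigned char *px, int n, unsigned char cut) {
    for (int i = 0; i < n; ++i) px[i] = px[i] > cut ? 255 : 0;
}

// Pick the processor that suits the job: small frames stay on the CPU, where
// launch and copy overhead would dominate; big frames go to the GPU's wide
// parallel units.
void threshold_image(std::vector<unsigned char> &img, unsigned char cut) {
    const size_t gpu_worthwhile = 1 << 16;   // arbitrary cutoff for illustration
    if (img.size() < gpu_worthwhile) {
        threshold_cpu(img.data(), (int)img.size(), cut);
        return;
    }
    unsigned char *dev = nullptr;
    cudaMalloc(&dev, img.size());
    cudaMemcpy(dev, img.data(), img.size(), cudaMemcpyHostToDevice);
    threshold_gpu<<<(unsigned)((img.size() + 255) / 256), 256>>>(dev, (int)img.size(), cut);
    cudaMemcpy(img.data(), dev, img.size(), cudaMemcpyDeviceToHost);
    cudaFree(dev);
}

int main() {
    std::vector<unsigned char> small_frame(640 * 4, 100);     // tiny crop: CPU path
    std::vector<unsigned char> large_frame(1920 * 1080, 100); // full frame: GPU path
    threshold_image(small_frame, 128);
    threshold_image(large_frame, 128);
    printf("small[0]=%d large[0]=%d\n", small_frame[0], large_frame[0]);
    return 0;
}
```

On an HSA machine with coherent shared memory, the explicit copies in the GPU branch would also go away, which is precisely the point Su was making about mixing-and-matching compute units without penalty.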
Although Su is convinced that the creation of a functioning holodeck is simply a matter of time, she wouldn't put a date on when it might appear. "Maybe it's 10 years out, maybe it's 15 years out, maybe it's 20 years out," she said.
But she was quite certain about one thing: that simple, linear computing has taken us as far as it can go. "I think it's fair to say that the age of traditional computing is dead," said AMD's HSA champion. ®
COMMENTS
we don't need *that* much processing power
Our impression that we can see everything in front of us all at once is a trick played on us by our brain, as demonstrated by the admirably titled "Gorillas in our midst" study. The trick will be to work out what we are really looking at in any given fraction of a second and render it in full detail within that ±2 degrees or so, with less precision from there out to ±10 degrees; beyond that, everything else can remain rather fuzzy and we'd never know.
Holodecks aren't just about processing power
Sure, we might be able to recreate the visuals and audio within a decade or so, but that's a long way from what the ST:TNG holodecks were depicted as being capable of. Tactile and olfactory simulation are the biggest challenges here: how do you simulate the roughness and solidity of a rock, the furriness of a cat, or the softness of a comfy armchair? Then there's smell and taste, which are notoriously difficult to replicate, let alone simulate.
I can see "360-degree viewing rooms" emerging within the next 10 or so years: I imagine a cuboidal room, perhaps the size of a toilet cubicle, whose walls and ceiling consist entirely of hi-res monitors to create the effect of you being inside a "glass box" within a virtual environment. For real authenticity these monitors would need to be capable of parallactic 3D (not the simple stereoscopic pseudo-3D of today's TVs) so if you move your head to see around a nearby tree, for example, what's behind the tree comes into view. And that technology is quite a long way off yet, as it requires in-situ 3D positional mapping of every object in the scene, although the computational power discussed in the article certainly makes running this kind of display feasible.
Re: Holodecks aren't just about processing power
I agree that of course the holodecks are just a plot device, but the question of whether people actually move around inside the holodeck has been 'explained' many times...
They don't actually stand on the floor; they're suspended a millimetre or two above it on a forcefield, and when they walk they just get the sensation of walking while the 'forcefields and photons' move around them.
A great demonstration of this is the episode of Voyager when Torres is trying to kill herself, and she leaves the holodeck part way through a skydive. (I don't remember the episode name... I'm not THAT geeky :-)
Re: Holodecks aren't just about processing power
No, if intel are making them you'll have to watch a 3d "intel inside" jingle before it starts generating your room... then AV vendors will start paying intel to put holographic AV crapware in your holodeck... and then everything will turn blue when the american built piece of shit crashes.
Holodeck != VR
The title suggested that AMD might be sitting on some breakthrough in:
Holography,
Matter Transportation (i.e. Teleportation),
Matter Replication,
Inertial Field Manipulation (with gravity or general field manipulation to allow for skydiving etc).
All of these technologies were required for the Holodeck to work (it wasn't even a deck, it was a room).
But no, they are promoting more processing power. I suggest that by the time we figure out even one of the above technologies we will have more than enough compute power – it's the last thing we need to worry about here.
