Nvidia drops veil on game-changing might of VGX

VDI'ing Your BYODs

HPC blog What’s a “holy crap” moment? For me, it’s when I see or hear (or do) something that has far-reaching and previously unforeseen consequences. I’ve had at least two of these moments (so far) at the GTC 2012 conference. The first was when Jen-Hsun Huang, in his keynote presentation, tossed up a slide about Kepler and this new thing they’re calling VGX.

VGX is a Kepler device that packages the GPU with components that enable very fast, virtualised GPU streaming. Tim Prickett-Morgan, America’s premier technology journalist (and national treasure), wrote a good summary of VGX here.

At a basic level, what VGX brings to the table is desktop virtualisation for the knowledge worker and power user. In some ways, it will give power users more resources than they’ve ever had before. In real terms, it will give them the freedom to truly work (or display their work) on a wide variety of devices from anywhere they have a broadband connection. It looks to me like any device that can handle an H.264 video stream can also handle the stream of any application – regardless of its size, complexity, or need for lots of compute resources.
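To make that thin-client claim concrete, here’s a minimal sketch of what the device-side workload reduces to if the desktop really does arrive as an ordinary H.264 stream. The PyAV library and the stream URL are my stand-ins, not Nvidia’s actual client or protocol – just enough to show that the client’s only heavy job is video decode.

```python
# Minimal thin-client sketch: if the remote desktop arrives as an
# ordinary H.264 stream, the client's only real job is video decode.
# Requires the PyAV library (pip install av); the URL is hypothetical.
import av

STREAM_URL = "rtsp://vdi-gateway.example.com/desktop"  # hypothetical endpoint

container = av.open(STREAM_URL)

for frame in container.decode(video=0):
    # Each decoded frame is one refresh of the remote desktop. A real
    # client would blit it to the screen; converting to a NumPy array
    # stands in for that final display step.
    pixels = frame.to_ndarray(format="rgb24")
    print(f"pts={frame.pts}: {frame.width}x{frame.height}")
```

Everything upstream of that loop – the application, the GPU render, the encode – happens in the data center, which is the whole point.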

With VGX, the CPU, memory, and storage capability of the device you’re using are irrelevant. All of the heavy lifting takes place in the data center, on beefy servers connected with fast switches. Other than the addition of VGX cards, the rest of the infrastructure doesn’t look much different from what we’re familiar with today. There’s a virtualisation layer, provided by VMware or Citrix, which allows lots of desktops to reside on the same physical servers.

The secret sauce and ‘new thing’ here is VGX – and that makes all the difference. Simply put, VGX is supposed to deliver the desktop view – the output from your applications – faster than ever before. Fast enough that it’s virtually the same as what you’d have sitting at your office desk, working on your desktop or laptop.

The reason we haven’t seen VDI extend past task workers is latency. From a user perspective, latency is that lag you feel when your screen doesn’t refresh as quickly as you expect. Latency in VDI occurs at every step of the journey from your device up to the server and back down again. With VGX, Nvidia says, it attacked latency everywhere it could.

If the live demonstrations at GTC are any indication, they were largely successful. So how successful is “largely successful”? According to figures Nvidia used in an online gaming context (more on that in an upcoming blog), they’ve reduced typical cloud-to-device latency from somewhere around 284 milliseconds down to around 160 milliseconds – a substantial improvement. But what does that mean?
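Some quick arithmetic puts those figures in perspective. The stage-by-stage breakdown below is my own illustrative guess at where the milliseconds go – only the two totals come from Nvidia’s numbers.

```python
# Back-of-envelope latency budget for a streamed desktop round trip.
# The per-stage figures are illustrative guesses; only the two totals
# (284 ms before, 160 ms with VGX) come from Nvidia's quoted numbers.

stages_ms = {
    "input event up to server": 30,   # click/keystroke travels upstream
    "render frame on GPU":      20,   # server-side frame generation
    "H.264 encode":             10,   # hardware encoder alongside the GPU
    "network down to device":   60,   # WAN transit of the video stream
    "decode and display":       40,   # client-side decode plus refresh
}
print(f"hypothetical VGX round trip: {sum(stages_ms.values())} ms")

# What the quoted totals mean in frames of lag at 30 fps:
frame_ms = 1000 / 30
for label, total in (("before", 284), ("with VGX", 160)):
    print(f"{label}: {total} ms = {total / frame_ms:.1f} frames at 30 fps")
```

In other words, the quoted improvement takes you from roughly eight and a half frames of lag at 30 frames per second to under five – the difference between visibly mushy and merely noticeable.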

During the keynote, Nvidia demonstrated VGX in a couple of different ways. First, they showed an Apple iPad sporting a spiffy Microsoft Windows 7 desktop. Looking at the system properties, Windows thought it was using a Quadro GPU with 1,536 cores.

Then they opened up an Autodesk session and manipulated a model live on screen.

The model moved smoothly with little discernible lag, given what they were doing. So, under highly favorable keynote conditions, it worked very well.

The next test was quite a bit more challenging. Grady Cofer, special effects supervisor at Industrial Light & Magic, took the stage to put VGX through its paces. Cofer has worked on a wide range of movies, including recent blockbuster The Avengers and the upcoming action flick Battleship.

He talked about the problems inherent in trying to show directors action scenes that might be as large as 40 Terabytes (yes, Terabytes). Directors, being directors, want to change things around, but unless the director is standing in your cube at ILM, you can’t change things on the fly to show him the effect of his changes on the finished product.

Using the same little iPad, Grady logged into ILM to give us a look at how his people work their industrial magic. In the Hulk shot, he showed us how he can move around and use his desktop editing tools just as he would if he were sitting at his workstation.

He upped the ante with a shot from his upcoming Battleship movie. In this scene, ‘grinder’ devices are heading toward our battleship. They whirl and glow, moving in from behind the camera position to attack the ship and grind it senseless.

Grady then took us through routine changes he might be asked to make by a director. He changed the color of the glow, added a bit more glow, and even changed the path of a grinder so that it came in at a different angle.

We could see the effect of these changes as he scrubbed along the timeline, and then again as he allowed the scene to run at normal speed. To my eyes, at least, it all rendered very quickly and ran just as quickly – pretty much what you’d expect if you were commanding an entire render farm from your own workstation. But he was using a lowly iPad via VGX and a broadband network connection.

The point Nvidia was trying to make was that if this is fast enough for an ultimate power user like Grady Cofer and Industrial Light & Magic, then it’ll probably run our desktop apps just fine. If everything they’re talking about works as advertised, they’re right.

Which would change everything. Or to put it in properly couched analyst-speak, “It has the potential to significantly change the status quo, with respect to desktop virtualisation and device usage patterns.”

I don’t think that this means the absolute end of workstations, beefy PCs, and high-end laptops. There will still be some benefit to having a lot of local power at your fingertips. However, for a very large number of users – even power users – this VDI solution will work very well.

The benefits to internal IT are considerable. If all desktops are streamed, then BYOD (bring your own device) can easily become the new normal. Any device that has a screen and a keyboard, mouse, or some other controller can be used as ‘your own device’ and work with enterprise software from the OS on up.

End-user support gets much easier, security and access control can be radically improved, and overall costs should drop. Networks might need to be beefed up, although not as much as you might think. It’s my understanding that the only data that’s being moved is what’s in the stream, and the commands coming up from the user. This is a big load when you’re talking about huge numbers of users, but it’s not as much as, say, thousands of users downloading and uploading documents all day long. At least that’s my snap judgment – readers, please chime in with your thoughts.
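As a sanity check on that snap judgment, here’s the back-of-envelope version. The per-stream bitrate is my assumption – a 1080p H.264 desktop stream commonly runs a few Mbit/s – and the user counts are arbitrary.

```python
# Back-of-envelope network load for streamed desktops. The bitrates
# are assumptions: a 1080p H.264 stream commonly runs a few Mbit/s
# downstream, while upstream input commands are comparatively tiny.

STREAM_MBPS   = 5.0    # assumed downstream video bitrate per user
COMMANDS_MBPS = 0.05   # assumed upstream input traffic per user

for users in (100, 1_000, 10_000):
    down_gbps = users * STREAM_MBPS / 1000
    up_gbps   = users * COMMANDS_MBPS / 1000
    print(f"{users:>6} users: ~{down_gbps:.1f} Gbit/s down, "
          f"~{up_gbps:.2f} Gbit/s up")
```

Even at 10,000 concurrent users that’s on the order of 50 Gbit/s of steady downstream video – a real load, but a predictable one, and nothing a well-built data center core can’t carry.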

I don’t think there’s much impact on storage. Sure, you’ll still need a lot of it, but I don’t see why VDI would necessarily require much more. Much of the space that’s devoted to backing up individual desktops and laptops could be repurposed for other needs, since the corporate desktop image would now reside on servers instead of individual user machines.

I’m still trying to wrap my head around all of the implications of VGX. Suffice it to say there are a lot of them to consider. However, we have some time to think about it. Nvidia isn’t talking specifics when it comes to general availability, but we’re probably looking at the first half of 2013 for the formal rollout.

To me, this is potentially a big game-changer, but what do you think? Am I overlooking something? Let me know in the comments section. ®
