Graphics shocker: Nvidia virtualizes Kepler GPUs

VGX revs virty desktops, fluffs gamy clouds, changes everything

GTC 2012 You game-console makers who still want to be in the hardware business, look out. You console makers who don't want to be in the hardware business (this might mean you, Microsoft), you can all breathe a sigh of relief: after a five-year effort, Nvidia is adding graphics virtualization to its latest "Kepler" line of GPUs.

The Kepler GPUs, previewed in the GeForce line back in March, are the stars of the GPU Technology Conference that Nvidia is hosting this week in San José, California – but some of the most interesting news about them was kept under wraps until Tuesday's keynote by Nvidia cofounder and CEO Jen-Hsun Huang.

The feeds and speeds of the GeForce discrete graphics cards for desktops and laptops and of the Tesla K10 coprocessors are already out there, but for his GTC keynote Huang revealed VGX, a set of extensions to the GPU architecture that allow a GPU to be virtualized so it can be shared by multiple client devices over a private network, or render images remotely and stream them down from VDI or gaming clouds.

Huang said that more than five years ago, Nvidia's engineering team "just started dreaming" about cool things they might do, and decided then and there that they wanted to take GPUs into the cloud, both for remote graphics and remote computation.

The computation part is relatively easy, but the funny thing about graphics cards rendering images is that they just don't like to share. And, more importantly, they're chock full of state that needs to be managed by whatever virtualization layer (called a hypervisor for GPUs, just as it is for CPUs) is managing the carved-up bits of the GPU.

During the Q&A session following his keynote introduction of the VGX feature, Huang explained that because graphics chips have so much pipelining and so many registers, as well as so many cores and threads, they are "inherently unfriendly" to being diced and sliced and virtualized.

A CPU, by comparison, might have tens of threads and a bunch of registers with kilobytes of data that a hypervisor has to juggle, but a GPU – such as the Kepler chip – has many thousands of threads and many megabytes of state data from all of those threads that have to be managed.

Simply put, GPU virtualization is many orders of magnitude more complex than CPU virtualization.
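
To put rough numbers on that gap, here is a back-of-the-envelope sketch in Python. The thread counts, register counts, and byte sizes are illustrative assumptions chosen to match the "tens of threads, kilobytes" versus "thousands of threads, megabytes" framing above, not figures published by Nvidia.

```python
# Back-of-the-envelope comparison of the state a hypervisor must juggle when
# it context-switches a CPU versus a virtualized GPU. All figures here are
# illustrative assumptions, not vendor specifications.

def state_bytes(threads, registers_per_thread, bytes_per_register):
    """Register state that has to be saved and restored on a context switch."""
    return threads * registers_per_thread * bytes_per_register

# A server CPU: tens of hardware threads, a modest register file per thread.
cpu_state = state_bytes(threads=32, registers_per_thread=64, bytes_per_register=8)

# A Kepler-class GPU: thousands of resident threads, each with its own registers.
gpu_state = state_bytes(threads=16_384, registers_per_thread=64, bytes_per_register=4)

print(f"CPU state to manage: ~{cpu_state / 1024:.0f} KB")        # ~16 KB
print(f"GPU state to manage: ~{gpu_state / (1024 ** 2):.0f} MB") # ~4 MB
print(f"GPU carries roughly {gpu_state // cpu_state}x more state")
```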

Schematic diagram of Nvidia's VGX GPU virtualization (photo: Dan Olds)

Nonetheless, Nvidia has figured out how to build a VGX GPU hypervisor that integrates with XenServer from Citrix Systems and allows for a Kepler GPU to be carved up into as many as 256 virtual GPUs. These virtual GPUs can be tied to a specific virtual machine running on a hypervisor on a server and managed just like a real GPU, with the ability to allocate more CUDA cores to a virtual PC or server image for its graphics needs.
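
To picture what that carving-up means in practice, here is a small hypothetical Python model of the bookkeeping such a hypervisor might do for one physical Kepler GPU: virtual GPUs are bound to VM identifiers, each reserving a share of the 1,536 CUDA cores, capped at the 256-slot limit Huang quoted. None of the class or method names correspond to Nvidia's or Citrix's actual APIs; this is a sketch of the idea only.

```python
# Hypothetical model of a VGX-style hypervisor carving one physical Kepler GPU
# into virtual GPUs bound to individual VMs. Illustrative sketch only; it does
# not reflect Nvidia's or Citrix's real interfaces.
from dataclasses import dataclass, field

@dataclass
class PhysicalGPU:
    total_cuda_cores: int = 1536   # a full Kepler GPU, per the article
    max_virtual_gpus: int = 256    # upper bound Huang quoted
    free_cores: int = field(init=False, default=0)
    vgpus: dict = field(default_factory=dict)

    def __post_init__(self):
        self.free_cores = self.total_cuda_cores

    def attach_vgpu(self, vm_id: str, cuda_cores: int) -> None:
        """Bind a virtual GPU slice to a VM, reserving CUDA cores for it."""
        if len(self.vgpus) >= self.max_virtual_gpus:
            raise RuntimeError("no virtual GPU slots left on this GPU")
        if cuda_cores > self.free_cores:
            raise RuntimeError("not enough free CUDA cores for this VM")
        self.free_cores -= cuda_cores
        self.vgpus[vm_id] = cuda_cores

    def detach_vgpu(self, vm_id: str) -> None:
        """Release a VM's slice back to the pool."""
        self.free_cores += self.vgpus.pop(vm_id)

gpu = PhysicalGPU()
gpu.attach_vgpu("vdi-desktop-01", cuda_cores=96)    # a light office desktop
gpu.attach_vgpu("cad-workstation", cuda_cores=512)  # a heavier graphics workload
print(gpu.free_cores)                               # 928 cores still unallocated
```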

VGX is not just about cutting up a big GPU so it can be used by multiple virtual PC images in a virtual desktop infrastructure (VDI) setup. VGX also allows a Kepler GPU to be sliced up into virtual GPUs that can see which VM is asking for what stream, and render the frame buffer directly for that VM instead of going through the CPU.

Huang explained that current VDI implementations, such as those based on the Receiver client from Citrix Systems, have a "software GPU" that gets in the way. His approach puts a hypervisor with multiple virtual GPUs on the backend server – in this case running XenServer – so the software-based GPU can be taken out of the loop.

Block diagram of Nvidia's Kepler VGX GPU virtualization (photo: Dan Olds)

Huang said that the Kepler GPU could support as many as 256 virtual GPUs, and he conceded that people might want to take this virtualized CPU-GPU stack and install it on their home or office PCs so those machines could be accessed from any outside device, such as another PC, a smartphone, or a tablet, and still offer the same experience and functionality as the home device.

In effect, your PC can become your personal cloud, rendering your applications remotely.

To get the VGX ball rolling, Nvidia has cooked up something called a VGX board, which has four Kepler GPUs with 192 cores each – that's a mostly dud chip, considering that a standard Kepler GPU has 1,536 cores – and 16GB of memory that is carved up into four segments and used as a frame buffer for each Kepler GPU.

This card, which is passively cooled and designed to slide into servers like the Tesla GPU coprocessors, is able to provide virtual GPUs for about 100 virty PCs, or 25 per GPU on the card. There's no word yet on when these VGX cards are going to be available or what they'll cost, but presumably they will be less expensive than a full-on Kepler GeForce or Tesla card.
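
For reference, the capacity math on that board works out as below, using only the figures quoted above; the per-desktop frame-buffer share at the end is just a straight division, not an Nvidia specification.

```python
# Back-of-the-envelope capacity math for the VGX board described above,
# using only the figures quoted in the article.
gpus_per_board = 4
cores_per_gpu = 192
board_memory_gb = 16
vdi_users_per_board = 100

print(f"CUDA cores per board:     {gpus_per_board * cores_per_gpu}")              # 768
print(f"Frame buffer per GPU:     {board_memory_gb / gpus_per_board} GB")         # 4.0 GB
print(f"Virtual PCs per GPU:      {vdi_users_per_board // gpus_per_board}")       # 25
print(f"Frame buffer per desktop: {board_memory_gb * 1024 / vdi_users_per_board:.0f} MB")  # ~164 MB
```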

Nvidia is partnering with the key server makers, plus Microsoft, VMware, and Citrix for hypervisors and for end-user virty client software to support the VGX hardware and hypervisor.

Citrix, which is arguably today's VDI leader, seems to have the VGX pole position. But the point of VGX is not to be tied to any particular hypervisor or client device, Huang explained. You don't need anything but a bit of software like Receiver (which is free) or the VMware or Microsoft analogues for VDI, and for cloudy gaming all you need is a decoder that is compliant with the H.264/MPEG-4 standard on your client device.
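
To underline how thin the client side is, here is a minimal sketch: any stock H.264-capable player will do, with no GPU required on the device. The stream URL below is a made-up placeholder, not a real VGX endpoint.

```python
# Minimal thin-client sketch: all the client needs is an off-the-shelf H.264
# decoder. Here a (hypothetical) session URL is handed to ffplay, the player
# that ships with FFmpeg; no GPU is required on the client device.
import subprocess

STREAM_URL = "rtsp://vgx-host.example.com/desktop-session"  # placeholder, not a real endpoint

subprocess.run(["ffplay", STREAM_URL], check=True)
```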
