Nvidia reveals CUDA 6, joins CPU-GPU shared memory party

Tesla headman: 'Biggest pain point' for developers – memory management – is now history

Nvidia has announced the latest version of its GPU programming platform, CUDA 6, which adds a "Unified Memory" capability that, as its name implies, relieves programmers of the trials and tribulations of manually copying data back and forth between separate CPU and GPU memory spaces.

CUDA 6 Unified Memory schematic

CUDA 6 fools a system's CPU and GPU into thinking they're dipping into the same shared memory bank

"Programmers have always found it hard to program GPUs," Sumit Gupta, the general manager of Nvidia's HPC-focused Tesla biz told The Reg, "and one of the biggest reasons for that – in fact, this is the reason – has been that there were always two memory spaces: the CPU and its memory, and the GPU and its own memory."

Being software, CUDA of course does nothing to physically unite those two memory spaces – the CPU still has its own memory and the GPU has its own chunk. To a programmer using CUDA 6, however, that distinction disappears: all the memory access, delivery, and management goes "underneath the covers," to borrow the phrase Oracle's Nandini Ramani used to describe Java 8's approach to parallel programming at this week's AMD developer conference, APU13.

From the point of view of the developer using CUDA 6, the memory spaces of the CPU and GPU might as well be physically one and the same. "The developer now can just operate on the data," Gupta says.

In other words, if a dev wants to add A to B, and A is in the CPU memory while B is in the GPU memory, the newly lucky dev can now just say "add A to B," and not give a fig about where either bit of data resides – the underlying CUDA 6 plumbing will take care of accessing A and B and munging them together.

The 'super simplified' memory management code introduced in CUDA 6

Before CUDA 6, left; and after, right
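
To make the contrast concrete, here is a minimal before-and-after sketch in CUDA C – not Nvidia's own slide code, and assuming a CUDA 6-or-later toolkit and a Unified Memory-capable GPU – in which a trivial kernel increments an array:

    // A minimal sketch, not Nvidia's own sample; assumes a CUDA 6+
    // toolkit and a GPU that supports Unified Memory.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void addOne(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] += 1.0f;
    }

    // Before CUDA 6: separate host and device buffers, explicit copies both ways.
    void withoutUnifiedMemory(int n) {
        float *h = (float *)malloc(n * sizeof(float));
        float *d;
        cudaMalloc(&d, n * sizeof(float));
        for (int i = 0; i < n; ++i) h[i] = (float)i;
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
        addOne<<<(n + 255) / 256, 256>>>(d, n);
        cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("explicit copies: %f\n", h[0]);
        cudaFree(d);
        free(h);
    }

    // With CUDA 6 Unified Memory: one managed allocation, no explicit copies.
    void withUnifiedMemory(int n) {
        float *x;
        cudaMallocManaged(&x, n * sizeof(float));
        for (int i = 0; i < n; ++i) x[i] = (float)i;
        addOne<<<(n + 255) / 256, 256>>>(x, n);
        cudaDeviceSynchronize();  // let the kernel finish before the CPU reads x
        printf("unified memory:  %f\n", x[0]);
        cudaFree(x);
    }

    int main() {
        const int n = 1 << 20;
        withoutUnifiedMemory(n);
        withUnifiedMemory(n);
        return 0;
    }

The one chore the managed version keeps is the cudaDeviceSynchronize() call: the kernel has to finish before the CPU touches the data again. The data movement hasn't vanished – it has merely gone under the covers.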

According to Gupta, this new capability reduces programming effort by almost 50 per cent. Not being a CUDA programmer himself, your Reg reporter will have to wait for reports from the field – or from the article comments – to judge the veracity of the Tesla honcho's assertion.

To support his point, Gupta said, "We have several programmers who have told us that their biggest pain point on day one was always managing the data movement and the memory and the memory management. And by taking care of that, automatically doing that, we've significantly improved programmer productivity."

There is, of course, still some latency involved in moving the data from where, for example, the CPU can work with it to where the GPU can get its hands – or cores – on it, but the developer doesn't have to worry about writing the code to transfer it, nor does the compiler have to deal with the extra lines of code that were previously necessary to accomplish that move.

CUDA 6 adds a few other niceties such as new drop-in libraries that replace some CPU libraries with GPU libraries, and some redesigned GPU libraries that automatically scale across up to eight GPUs in a single node.
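
For the drop-in libraries, the pitch is that no source changes are needed at all. As a hedged illustration – assuming the GPU BLAS that CUDA 6 ships as a drop-in replacement is linked or preloaded ahead of the host BLAS, which is how Nvidia describes the mechanism – an ordinary BLAS call like the one below is the kind of thing meant to be silently routed to the GPU; the build details are illustrative, not a recipe:

    // Plain host-side BLAS call – nothing CUDA-specific in this file.
    // Per Nvidia's drop-in pitch, relinking or preloading the GPU BLAS
    // ahead of the CPU BLAS should accelerate this without code changes.
    #include <cstdio>
    #include <cstdlib>

    // Standard Fortran-style BLAS prototype (column-major storage).
    extern "C" void dgemm_(const char *transa, const char *transb,
                           const int *m, const int *n, const int *k,
                           const double *alpha, const double *a, const int *lda,
                           const double *b, const int *ldb,
                           const double *beta, double *c, const int *ldc);

    int main() {
        const int n = 1024;
        double *a = (double *)malloc(n * n * sizeof(double));
        double *b = (double *)malloc(n * n * sizeof(double));
        double *c = (double *)malloc(n * n * sizeof(double));
        for (int i = 0; i < n * n; ++i) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

        const double alpha = 1.0, beta = 0.0;
        dgemm_("N", "N", &n, &n, &n, &alpha, a, &n, b, &n, &beta, c, &n);

        printf("c[0] = %f\n", c[0]);  // expect 2048.0
        free(a); free(b); free(c);
        return 0;
    }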

But Gupta told us that what devs have been clamoring for most avidly is to be freed from memory-management chores, which Unified Memory provides.

With CUDA 6, he said, "The programmer just blissfully programs." ®

Bootnote

In a related development, Mentor Graphics has announced that it is adding OpenACC 2.0 support to the GCC compiler, thus giving that industry-standard tool the ability to generate assembly-level instructions for Nvidia GPUs.
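
OpenACC takes a directive-based approach: the developer annotates ordinary loops and leaves the offloading to the compiler. A minimal, illustrative sketch – assuming a compiler built with OpenACC support – is a SAXPY loop like this:

    // Illustrative OpenACC example; assumes a compiler with OpenACC support.
    // The pragma asks the compiler to offload the loop to an accelerator,
    // copying x in and copying y both in and back out.
    #include <cstdio>

    void saxpy(int n, float a, const float *x, float *y) {
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 16;
        static float x[1 << 16], y[1 << 16];
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(n, 3.0f, x, y);
        printf("y[0] = %f\n", y[0]);  // expect 5.0
        return 0;
    }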
