Nvidia's new CUDA 6 has the 'most significant new functionality in the history of CUDA'

Goal: To make programming their finicky but muscular GPUs an easier task for mere mortals

Nvidia has released CUDA 6, an upgrade to its proprietary GPU programming language that it says "includes some of the most significant new functionality in the history of CUDA."

For our money, the most important aspect of CUDA 6 is its unified memory scheme, which The Reg described in some detail when the CUDA Toolkit 6.0 was announced last November. In a nutshell, unified memory frees you from having to explicitly copy data back and forth between the CPU's and the GPU's memory spaces.

In CUDA 6, "Managed memory is accessible to both the CPU and GPU using a single pointer," Nvidia GPU honcho Mark Harris writes in a blog post.

"The key is that the system automatically migrates data allocated in Unified Memory between host and device so that it looks like CPU memory to code running on the CPU, and like GPU memory to code running on the GPU."

In that same post – in which unified memory sits at the top of the list, with a detailed explanation to match – Harris also touts four other "most important new features of CUDA 6."

First of the four runners-up: CUDA is now supported on Nvidia's Tegra K1 system-on-a-chip (SoC) for embedded and mobile use cases, fulfilling the company's long-time goal of "CUDA Everywhere".

The Tegra K1, Nvidia's latest mobile processor, couples a 192-core Kepler GPU with a quad-core ARM Cortex-A15 CPU, along with integrated video encoding and decoding, image/signal processing, and other niceties that Harris lumps into "many other system-level features." The Tegra K1 is the SoC powering the Jetson TK1 development board introduced at Nvidia's recent GPU Technology Conference.

The Jetson TK1 development board – 192 CUDA cores for $192

Harris also writes in cryptic CUDAese that "CUDA 6 introduces XT Library interfaces which provide automatic scaling of cuBLAS level 3 and 2D/3D cuFFT routines to 2 or more GPUs."

What this translates to is that if you have two or more GPUs in your system – a dual-GPU card or three in an HPC box, say – the libraries will automatically spread fast Fourier transforms and matrix-matrix multiplication across them, and matrices too large to fit into a single GPU's memory can spill over into the CPU's memory as well.
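For the curious, here's a hedged sketch of our own of what the cuBLAS-XT flavour looks like. The device list and matrix size below are illustrative, and we're assuming a box with two GPUs visible as devices 0 and 1; unlike plain cuBLAS, the XT interface takes ordinary host pointers and handles the tiling across GPUs itself:

```cuda
// Illustrative cuBLAS-XT multi-GPU SGEMM sketch (ours, not Nvidia's sample).
// cublasXtSgemm() accepts host pointers and splits the matrix-matrix
// multiply across the GPUs selected via cublasXtDeviceSelect().
#include <cstdlib>
#include <cublasXt.h>

int main()
{
    const size_t n = 4096;                 // square matrices for brevity
    float *A = (float *)malloc(n * n * sizeof(float));
    float *B = (float *)malloc(n * n * sizeof(float));
    float *C = (float *)malloc(n * n * sizeof(float));
    // ... fill A and B with real data here ...

    cublasXtHandle_t handle;
    cublasXtCreate(&handle);

    int devices[2] = {0, 1};               // assumes two GPUs are present
    cublasXtDeviceSelect(handle, 2, devices);

    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, tiled automatically across both GPUs
    cublasXtSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                  n, n, n, &alpha, A, n, B, n, &beta, C, n);

    cublasXtDestroy(handle);
    free(A); free(B); free(C);
    return 0;
}
```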

Then there's the ability to develop software on your personal machine and run it on a remote device – whether it be a heavy-breathing HPC cluster or a li'l Jetson TK1 – using Nvidia's Nsight Eclipse Edition.

"Edit source code in the IDE running on your local PC (e.g. a laptop), then build, run, debug, and profile the application remotely on a server with a CUDA-capable GPU," Harris writes.

There's also a series of improvements to the CUDA development environment, along with what he characterizes as many new features, improvements, and bug fixes in the CUDA APIs, libraries, and developer tools. If those details interest you, check out the CUDA Toolkit 6.0 release notes [PDF].

Or if you'd prefer to simply dive in and discover what's what on your own, you can download CUDA 6 at Nvidia's CUDA Zone. ®
