Nvidia kicks out CUDA 3.1 for GPUs
Parallel Nsight plug-in for Visual Studio, too
If GPU coprocessors are going to go mainstream as adjuncts to CPUs in workstations and servers, the programming has to get easier and developers have to be able to exploit the languages, libraries, and development tools they have traditionally been using to create applications for PCs and servers.
With the launch of the CUDA 3.1 development kit and the Parallel Nsight plug-in for Microsoft's Visual Studio IDE, Nvidia is several steps closer to splashing in that mainstream.
While the CUDA 3.1 software development kit was announced today alongside the Parallel Nsight plug-in, Ian Buck, software director for GPUs at Nvidia, says the code has actually been available for download since June 23 and has already racked up tens of thousands of downloads.
The original CUDA 1.X toolkit from 2007 had a C compiler made by Nvidia and C extensions to allow routines to be dispatched to Nvidia GPUs in a workstation or server. The SDK could do single-precision math on one or more GPUs in a machine and supported 64-bit Windows XP platforms. In 2008, with the CUDA 2.X toolkit and the next generation of GPUs, Windows Vista and Mac OS X support was added, as was the ability to do double-precision math on the "Tesla" family of GPUs; the Parallel Nsight plug-in for Visual Studio went into beta that year.
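Those C extensions remain the heart of CUDA today: roughly, a `__global__` qualifier marks a routine as GPU code, and a triple-angle-bracket launch dispatches it from host code. A minimal sketch (illustrative, not taken from Nvidia's SDK samples):

```cuda
#include <cuda_runtime.h>

// __global__ is the C extension that marks a routine for dispatch
// to the GPU; each GPU thread runs one copy of the function body.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 4096;
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));   // GPU memory

    // The <<<blocks, threads>>> launch syntax is the other key
    // extension: ordinary host C code dispatching work to the GPU.
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

    cudaDeviceSynchronize();   // wait for the kernel to finish
    cudaFree(d_data);
    return 0;
}
```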
With the CUDA 3.0 toolkit, which came out in March of this year more or less concurrently with the "Fermi" GPUs, Nvidia added support for C++ class templates and class inheritance, beefing up its C++ support. (The official Fortran compiler for Nvidia GPUs comes from Portland Group, and according to Buck, there is no plan for Nvidia to cook up its own Fortran - or Java or PHP or any other language beyond C and C++.)
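Class-template support means device code can be written generically, with one source serving multiple numeric types. A hypothetical sketch of what CUDA 3.0-era C++ allows:

```cuda
// A class template whose method is usable in device code, as
// permitted since the CUDA 3.0 toolkit.
template <typename T>
struct Scaler {
    T factor;
    __device__ T apply(T x) const { return x * factor; }
};

// A templated kernel: the same source serves float and double
// (double precision requiring Tesla-class or Fermi hardware).
template <typename T>
__global__ void scale_all(T *data, Scaler<T> s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = s.apply(data[i]);
}
```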
With CUDA 3.1, the SDK is getting a feature called GPUDirect, a technology Nvidia has developed in conjunction with InfiniBand networking specialist Mellanox to allow direct GPU-to-GPU data transfers over InfiniBand networks without getting those silly CPUs (which think they run everything in the system) involved. The GPUDirect APIs are about more than InfiniBand adapters, of course, and have been written to give any third-party device a means of accessing GPU memory directly. GPUDirect is supported on Quadro graphics cards and Tesla GPUs.
As you can see from the release notes, CUDA 3.1 also brings 16-way kernel concurrency, allowing up to 16 different kernels to run at the same time on Fermi GPUs. Buck said a bunch of needed C++ features were added, such as support for function pointers and recursion, allowing more C++ apps to run on GPUs, and the release includes a unified Visual Profiler that supports CUDA C/C++ as well as OpenCL. The math libraries in the CUDA 3.1 SDK were also goosed, with some showing up to 25 per cent performance improvements, according to Buck.
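Kernel concurrency is exposed through CUDA streams: kernels launched into different streams are eligible to overlap on a Fermi part. A rough sketch, with illustrative identifiers:

```cuda
#include <cuda_runtime.h>

__global__ void work(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] = buf[i] * buf[i];   // stand-in for real per-stream work
}

int main(void)
{
    const int kStreams = 16;   // Fermi's concurrency ceiling in CUDA 3.1
    const int n = 1 << 16;
    cudaStream_t streams[kStreams];
    float *bufs[kStreams];

    for (int i = 0; i < kStreams; ++i) {
        cudaStreamCreate(&streams[i]);
        cudaMalloc((void **)&bufs[i], n * sizeof(float));
        // The fourth launch parameter picks the stream; kernels in
        // distinct streams may execute concurrently on Fermi GPUs.
        work<<<n / 256, 256, 0, streams[i]>>>(bufs[i], n);
    }
    cudaDeviceSynchronize();
    for (int i = 0; i < kStreams; ++i) {
        cudaFree(bufs[i]);
        cudaStreamDestroy(streams[i]);
    }
    return 0;
}
```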
It doesn't look like Nvidia will ever port the GNU C/C++ compiler to its GPUs, but there's nothing stopping the open source community from doing so. As for the future of CUDA, Buck says that, generally speaking, anything a CPU can do in terms of programming will have to be supported on the GPU. That's why Nvidia will keep adding C++ features to CUDA over time, along with new libraries for better supporting image processing and better exploiting the parallelism in the GPU. The idea is to have applications rely less and less on the CPU and let the GPUs do the hard work. The plan calls for integrating GPUs more deeply into systems and allowing the job schedulers used to control parallel supercomputing clusters to reach in and manage GPUs as they do CPUs today.
Proprietary; who cares?
It looks pretty awesome, and as someone who's more interested in writing code than pretending I'll ever modify the source of someone else's compiler, it's cool to see nVidia dominating this area - I'd hope it becomes a standard that ATI embraces too.
Also, nice to see us C++ people given a new lease of life on cutting-edge tech; function pointers, recursion, etc. are stuff these whippersnappers can't stomach :)
NVidia only loves open source to a point.
NVidia has yet to release an open-source version of their drivers to allow Linux to better operate on their cards. NVidia is the primary reason that Sony decided to disable "Other OS" support on their consoles. And CUDA as an interface will never be on ATI cards short of a merger or buyout. Yes, OpenGL/CL is on their new cards, but not the latest version.
While I think Intel/NVidia's infighting over hardware architecture and graphics implementation is ridiculous and I side with NVidia on a lot of that mess, Big Green still isn't innocent of "industry influence" on technologies. Gaming on Linux is still largely a joke compared to Windows systems (not the fault of the progenitors of Linux gaming: Wine, Cedega, indie developers doing what they can, etc., but rather short-sightedness and backroom dealing) thanks in part to companies like NVidia.
I didn't mean proprietary as in closed source
I meant it as in the language/infrastructure standard. OpenCL is a Khronos standard, so anyone can implement it. Not me, I couldn't — and I have no intention of looking at how anyone else has. But ATI could. CUDA is something NVidia made up and propagates for the purpose of selling NVidia products. It won't be coming to ATI cards any time soon.
But, as I was careful to point out, I'm just shouting ballyhoo from the sidelines without being directly affected. I just find it difficult to get excited about a technology promoted by one manufacturer for the benefit of that manufacturer, when it not only isn't the only game in town but the whole area is so nascent that no platform can really be described as dominant.