Original URL: https://www.theregister.com/2010/09/23/ian_buck_at_gtc/

CUDA daddy muses on future GPUs

'G'bye CPU, we hardly knew ye'

By Rik Myslewski

Posted in Software, 23rd September 2010 04:00 GMT

GTC If you think of Nvidia as a hardware company, you're only three-quarters correct — at least according to Ian Buck, Nvidia's senior director of GPU computing software, who sees an even greater role for software as his company's products evolve.

"It's important to note that software and hardware really are one," Buck told attendees of Nvidia's GPU Technical Conference in San José, California, Wednesday. "I believe Nvidia at this point is one-quarter software engineers, in term of engineering staff. So it's a big part of what we do."

Buck is one of the prime forces behind Nvidia's CUDA (compute unified device architecture) parallel-processing ecosystem, and his focus — as his title suggests — lies heavily on the "C for CUDA" software side.

At its San José conference, Nvidia heavily emphasized its products' applicability to the HPC market. Games got their moments in the sun, but HPC was where the action was — and, for that matter, where Nvidia's execs are steering the company.

And it was towards the multi-node, clustered, GPU-centric HPC world that Buck aimed his remarks: after a quick overview of Nvidia's past, he settled into talking about its future.

"There's a couple of places we're innovating," he told his audience of engineers, scientists, and deisgners. "One is within the node."

In a single compute node of a multi-node cluster, Buck explained, "OS integration continually challenges how a GPU is a coprocessor to the CPU. How does that boil down into a basic operating system responsibility, now that there's two kinds of processors in the system?"

Buck had a few ideas about how that node-level CPU/GPU communication might be improved: "Scheduling, preemption, virtual memory — [Nvidia CEO] Jen-Hsun hinted at some of that stuff in his keynote."

Buck also discussed simplification of the CUDA programming model: "If we integrate the GPU and its memory and its scheduling more with the CPU," he said, "we can get simplification and optimization of the programming model which will make it easier to move code onto the GPU — and we will be doing that in future releases of CUDA."
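
To see what he means, consider the discipline CUDA imposes today: the programmer keeps two copies of every array, one per processor, and shuttles bytes between them by hand. The kernel and variable names in this sketch are our own invention, not anything from Nvidia's roadmap:

    #include <cuda_runtime.h>
    #include <stdlib.h>

    // Illustrative kernel: scale every element of an array in parallel.
    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main(void) {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *h_data = (float *)malloc(bytes);        // host-side copy
        for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

        float *d_data;                                 // separate device-side copy
        cudaMalloc(&d_data, bytes);
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);  // by hand...

        scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);  // ...and back
        cudaFree(d_data);
        free(h_data);
        return 0;
    }

Fold the GPU's memory and scheduling in with the CPU's, and much of that copying boilerplate could simply evaporate.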

Beyond improvements within the node, Buck also discussed extending them across the cluster. "When you have a cluster of a hundred nodes, or ten-thousand nodes, or a hundred-thousand nodes, how can the GPU better interface with the rest of the infrastructure?"

He answered himself: "We can be doing better on things like MPI and sharing communications. We've taken some good first steps on allowing high-speed network hardware — InfiniBand — to directly access and communicate with GPUs. I expect more things to be happening there, but clearly more of your data is going to be living on the GPU, and we need to make sure that that data can communicate with other GPUs, within the node or across the network."
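
To see why direct access matters, here's a sketch of what exchanging GPU-resident data over MPI looks like without it. The function and buffer names are invented for illustration, but the staging pattern was the standard recipe of the day:

    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdlib.h>

    // Swap n floats of GPU-resident data with a peer rank, the long way
    // round: device-to-host copy, MPI over the wire, host-to-device copy.
    // Direct InfiniBand-to-GPU access aims to cut out the two staging hops.
    void exchange(float *d_send, float *d_recv, int n, int peer) {
        const size_t bytes = n * sizeof(float);
        float *h_send = (float *)malloc(bytes);
        float *h_recv = (float *)malloc(bytes);

        cudaMemcpy(h_send, d_send, bytes, cudaMemcpyDeviceToHost);  // stage out

        MPI_Sendrecv(h_send, n, MPI_FLOAT, peer, 0,
                     h_recv, n, MPI_FLOAT, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);            // network hop

        cudaMemcpy(d_recv, h_recv, bytes, cudaMemcpyHostToDevice);  // stage in

        free(h_send);
        free(h_recv);
    }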

Buck — unsurprisingly — wants to move more of an average workload onto GPUs: "Think about enhancing the programming model, both at the software and the hardware level, to just simply move more of the data, more of the problem to the GPU."
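
In practice, that means leaving data parked in GPU memory across many kernel launches rather than round-tripping it to the host at every step. A hedged sketch, with an invented one-dimensional stencil solver standing in for a real workload:

    #include <cuda_runtime.h>

    // Illustrative Jacobi-style relaxation step over a 1D grid. Interior
    // points are updated in parallel; boundary values are assumed to be
    // pre-set in both buffers.
    __global__ void relax(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n - 1)
            out[i] = 0.5f * (in[i - 1] + in[i + 1]);
    }

    // The data never leaves the GPU between iterations: the CPU just runs
    // the serial outer loop and ping-pongs two device buffers.
    void solve(float *d_a, float *d_b, int n, int iters) {
        for (int k = 0; k < iters; ++k) {
            relax<<<(n + 255) / 256, 256>>>(d_a, d_b, n);
            float *tmp = d_a; d_a = d_b; d_b = tmp;   // swap buffers
        }
        cudaDeviceSynchronize();
    }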

One of the goals of tweaking the programming model will be to transition code originally designed for CPUs onto GPUs. Or, as Buck described that goal, to "ease some of the transition pains from legacy, 15-year-old, 30-year-old Fortran code onto the GPU."
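
What might such a port look like? Roughly this: a serial Fortran DO loop re-expressed as a CUDA kernel, one thread per loop index. The loop is a textbook saxpy, chosen for illustration rather than taken from anyone's legacy code:

    // The serial Fortran original, something like:
    //
    //   DO I = 1, N
    //     Y(I) = A * X(I) + Y(I)
    //   END DO
    //
    // and its CUDA re-expression, one thread per loop index:
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    // Launched across the whole index range at once:
    //   saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);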

When asked about tighter hardware integration of the CPU and GPU — putting a CPU on the GPU die, for example, to help ease memory-bandwidth constraints — Buck deflected the query, saying: "Fundamentally I want to be thinking about what parts of my problems are ... serial computation, and [what parts are] data-parallel computation."

This munging of an essentially serial CPU with an essentially parallel GPU, of course, is how Intel has architected its upcoming Sandy Bridge processors, and is also at the heart of AMD's "right around the corner, we promise" Fusion line of APUs — "accelerated processing units", as that company dubs them.

But Nvidia's target — at least on the level with which Buck and his CUDA team are involved — looks beyond Intel's weak-sister integrated graphics or AMD's eyeball-melting, gamers'-delight graphics cards. Buck's target is HPC.

And for that market, he raised one interesting bit of futuristic conjecture: "Can a GPU be changing its level of parallelism to go more serial [or] to go more parallel as it needs to by nature of the program?"

When or if that should ever happen, and when or if that development migrates down from the petaflop heights of HPC to consumer desktops, our children may chuckle at the days when their forebears called one type of chip — or even one area of a chip — a CPU and another a GPU.

Except, of course, if all they want to do is play Crysis. ®