Mutant number-crunchers win cluster popularity contest

CUDA you dig it? Yes, you can

HPC blog Hybrid computing has come a very long way in a relatively short period of time. My first exposure to hybrids came at SC08 in the lovely city of Austin, Texas. Earlier that year, the Roadrunner system at Los Alamos National Lab had achieved two milestones: 1) It was the first system to break through the petaflop barrier; and 2) It was the first high-profile hybrid system.

Roadrunner was a combination of IBM 8-core CellBE accelerators and AMD Opteron CPUs. In my first meeting with NVIDIA’s Tesla team at SC08, I was sceptical. To me, it looked like IBM had a winning combination and would be off to the races, building smaller HPC hybrid systems and even commercial versions (like they did a little while later with Cell blades). IBM talked about building a vibrant ecosystem around Cell and ensuring that potential customers had all of the apps and tools they’d ever need to take advantage of the accelerated goodness promised by this new hybrid architecture.

NVIDIA agreed that the ecosystem was the key, and pointed to work that they were doing with this new CUDA environment. But in 2008, it wasn’t an overwhelming success. Here are some stats (taken from Jen-Hsun Huang’s GTC12 keynote).

[Chart: CUDA adoption figures, 2008 versus today, from the GTC12 keynote]


The column of figures at the far left shows where CUDA was in 2008. Those are the figures that NVIDIA cited to me in our meeting at SC08. They had 150,000 CUDA downloads, one decent-sized supercomputer, some interest from a relatively small number of universities, and around 4,000 papers. Not bad, but not necessarily ‘the next big thing’ either.

But take a look at the column on the right. That’s where CUDA is today: over 1.5 million downloads (one per second) and 22,500 academic papers.

More importantly, there are 35 NVIDIA-fuelled hybrid supercomputers on the Top500 list today. The NUDT Tianhe-1A system, with 14,300 CPUs and 7,100 NVIDIA GPUs, held down the top spot on the list in 2010. The upcoming Oak Ridge Titan system will sport more than 18,000 CPUs alongside 18,000 GPUs, and should become the fastest supercomputer in the world sometime this fall.

The final statistic to point out is the number of universities that have made CUDA part of their curriculum. This number has grown from a reasonably respectable 60 in 2008 to 560 today. As some of you may know, I write about the SC- and ISC-sponsored student cluster competitions, in which college undergraduates put together and run their own clustered systems in pursuit of supercomputing glory.

GPUs entered into the student cluster competition mix in 2010, but didn’t have a huge impact. The student teams sporting GPUs hadn’t had a lot of experience with them and didn’t have optimized codes for all of their apps.

In 2011, however, it was a different story. About half of the teams were using hybrid systems; some skewed more toward GPUs while others had only a smattering of them. The top two finishers both packed more than a smattering of GPU goodness, and GPU-enabled teams took four of the top five spots.

Why is it so important that universities are teaching CUDA? To me, it means that CUDA has “arrived” and isn’t going anywhere anytime soon. Student cluster competition teams using GPU-accelerated systems are a small sample set, sure; but I think it’s a significant data point. In 2010, there were one or two teams using GPUs, and they finished in the middle of the pack. In 2011, half the teams used GPUs, and they posted four of the top five scores. That’s quite an advance in just 12 months. ®

