
Mutant number-crunchers win cluster popularity contest

CUDA you dig it? Yes, you can

HPC blog Hybrid computing has come a very long way in a relatively short period of time. My first exposure to hybrids came at SC08 in the lovely city of Austin, Texas. Earlier that year, the Roadrunner system at Los Alamos National Lab had achieved two milestones: 1) it was the first system to break through the petaflops barrier; and 2) it was the first high-profile hybrid system.

Roadrunner combined IBM's eight-core PowerXCell (CellBE) accelerators with AMD Opteron CPUs. In my first meeting with NVIDIA's Tesla team at SC08, I was sceptical. To me, it looked like IBM had a winning combination and would be off to the races, building smaller HPC hybrid systems and even commercial versions (as it did a little while later with Cell blades). IBM talked about building a vibrant ecosystem around Cell and ensuring that potential customers had all of the apps and tools they'd ever need to take advantage of the accelerated goodness promised by this new hybrid architecture.

NVIDIA agreed that the ecosystem was the key, and pointed to the work it was doing on its new CUDA environment. But in 2008, it wasn't an overwhelming success. Here are some stats (taken from Jen-Hsun Huang's GTC12 keynote).
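For anyone who never peeked under the hood, the CUDA environment NVIDIA was touting boils down to writing data-parallel kernels in an extended dialect of C and launching them across thousands of GPU threads. The sketch below is purely illustrative (my own SAXPY example, written in modern CUDA style with unified memory rather than anything from NVIDIA's 2008-era materials):

```cuda
// Illustrative CUDA sketch: SAXPY (y = a*x + y) run on the GPU.
// Uses cudaMallocManaged (unified memory) for brevity; 2008-era CUDA
// would have used explicit cudaMalloc/cudaMemcpy instead.
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    // Each GPU thread handles one array element
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 2.0*1.0 + 2.0 = 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The appeal, and the reason the ecosystem mattered so much, is that this is recognisably C: the barrier to entry for scientists with existing codes was a port, not a rewrite in an exotic language.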

[Slide: CUDA by the numbers, from Jen-Hsun Huang's GTC12 keynote]

The column of figures at the far left shows where CUDA was in 2008. Those are the figures that NVIDIA cited to me in our meeting at SC08. They had 150,000 CUDA downloads, one decent-sized supercomputer, some interest from a relatively small number of universities, and around 4,000 papers. Not bad, but not necessarily ‘the next big thing’ either.

But take a look at the column on the right. That’s where CUDA is today: over 1.5 million downloads (one per second) and 22,500 academic papers.

More importantly, there are 35 NVIDIA-fuelled hybrid supercomputers on the Top500 list today. The NUDT Tianhe-1A system, with 14,300 CPUs and 7,100 NVIDIA GPUs, held down the top spot on the list in 2010. The upcoming Oak Ridge Titan system will sport more than 18,000 CPUs alongside 18,000 GPUs, and should become the fastest supercomputer in the world sometime this fall.

The final statistic to point out is the number of universities that have made CUDA part of their curriculum. This number has grown from a reasonably respectable 60 in 2008 to 560 today. As some of you may know, I write about the SC- and ISC-sponsored student cluster competitions, in which college undergraduates put together and run their own clustered systems in pursuit of supercomputing glory.

GPUs entered the student cluster competition mix in 2010, but didn't have a huge impact. The student teams sporting GPUs hadn't had much experience with them and didn't have optimised codes for all of their apps.

In 2011, however, it was a different story. About half of the teams were using hybrid systems; some skewed heavily toward GPUs while others had only a smattering of them. The top two finishers both packed more than a smattering of GPU goodness, and GPU-enabled teams took four of the top five spots.

Why is it so important that universities are teaching CUDA? To me, it means that CUDA has “arrived” and isn’t going anywhere anytime soon. Student cluster competition teams using GPU-accelerated systems are a small sample set, sure; but I think it’s a significant data point. In 2010, there were one or two teams using GPUs, and they finished in the middle of the pack. In 2011, half the teams used GPUs, and they posted four of the top five scores. That’s quite an advance in just 12 months. ®
