Original URL: http://www.theregister.co.uk/2009/10/01/oak_ridge_fermi/
Oak Ridge goes gaga for Nvidia GPUs
Fermi chases Cell for HPC dough
Oak Ridge National Laboratory may not be the first customer for Nvidia's new "Fermi" graphics processor, which was announced yesterday, but it will very likely be one of the largest.
Oak Ridge, one of the giant supercomputing centers managed and funded by the US Department of Energy to do all kinds of simulations and supercomputing design research, has committed to using the GPU co-processor variants of the Fermi chips, the kickers to the current Tesla GPU co-processors, in a future hybrid system that would have ten times the floating point oomph of the fastest supercomputer installed today.
Depending on the tests you want to use, the most powerful HPC box in the world is either the Roadrunner hybrid Opteron-Cell massively parallel custom blade box made by IBM for Los Alamos National Laboratory, or the Jaguar massively parallel XT5 machine at Oak Ridge, which uses only the Opterons to do calculations.
The Roadrunner machine relies on the Cell chips, which are themselves a kind of graphics processor with a single Power core linked in, to do the heavy lifting on floating point calculations. Each compute node in Roadrunner comprises a two-socket blade server using dual-core Opteron processors running at 1.8GHz.
Advanced Micro Devices has six-core Istanbul Opterons in the field that are pressing up against the 3GHz performance barrier. But shifting to these faster x64 chips would not radically improve the overall performance of the Roadrunner machine.
Going faster miles an hour
Each Opteron blade uses HyperTransport links out to the PCI-Express bus to link to two dual-socket Cell blades. Each Cell processor is running at 3.2GHz, and has eight vector processors (which are used to do the graphics in the Sony PlayStation 3, among other tasks the Cell chips were created to do). The Cell chips also include one 64-bit Power core to manage these on-chip vector processors, which deliver 12.8 gigaflops of double-precision performance each.
Each Opteron core gets its own Cell chip to do its math for it, like a nerd doing the homework of a blonde who isn't actually dating him, and the beauty is that x64 applications using the Message Passing Interface (MPI) protocol to run in parallel need only minor modifications to run on the hybrid Opteron-Cell box.
And each server node (one x64 node with two Cell co-processor nodes) can deliver 409.6 gigaflops of double-precision floating-point math. On the Linpack Fortran benchmark test, the Roadrunner with 129,600 cores is able to deliver 1.1 petaflops of sustained performance.
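That per-node figure follows directly from the numbers above, as a quick back-of-the-envelope sketch shows (all values are double-precision gigaflops taken from this article):

```python
# Back-of-the-envelope check of the Roadrunner per-node figure quoted above.
# All numbers are double-precision gigaflops from this article.

spe_gflops = 12.8              # per vector processor on a Cell chip
spes_per_cell = 8              # vector processors per Cell chip
cell_gflops = spes_per_cell * spe_gflops    # 102.4 gigaflops per Cell chip

# One server node pairs a two-socket Opteron blade with two dual-socket
# Cell blades, so four Cell chips do the heavy floating-point lifting.
cells_per_node = 2 * 2
node_gflops = cells_per_node * cell_gflops

print(node_gflops)             # 409.6 gigaflops per node
```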
While the Jaguar XT5 machine at Oak Ridge is powerful, weighing in at 1.06 petaflops, it has to rely on its 150,152 Opteron cores alone to do the math. What Jaguar needs is some powerful nerds so its blondes can run code, and it looks like the next generation of machines at the supercomputer center is going to use the Fermi GPUs.
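Per core, the co-processor pitch shows up in the arithmetic. Dividing each machine's sustained Linpack number by its core count, using only the figures quoted above, gives the hybrid box the edge:

```python
# Sustained Linpack gigaflops per core, from the figures quoted above.
roadrunner_gflops_per_core = 1.1e6 / 129_600    # hybrid Opteron-Cell
jaguar_gflops_per_core = 1.06e6 / 150_152       # plain Opteron XT5

print(round(roadrunner_gflops_per_core, 1))     # 8.5
print(round(jaguar_gflops_per_core, 1))         # 7.1
```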
It would not be at all surprising to see a hybrid architecture for the future Oak Ridge machine that uses PCI-Express 2.0 links to hook Fermi GPUs into Opteron server nodes, just like IBM is using PCI-Express 1.0 links to hook Cell boards into the Opteron nodes with Roadrunner.
Jeff Nichols, associate lab director for computing and computational sciences at Oak Ridge, said in a statement that the Fermi GPUs, which have eight times the double-precision floating-point performance of the current Teslas, at around 500 gigaflops, would enable "substantial scientific breakthroughs that would be impossible without the new technology."
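Taken at face value, those two numbers also pin down what the current Teslas must be doing in double precision, which is presumably a rating Nvidia is keen to move past:

```python
# Implied double-precision rating of today's Teslas, using only the two
# figures quoted above: Fermi at roughly 500 gigaflops, claimed to be
# an eight-fold improvement over the current parts.
fermi_dp_gflops = 500.0
implied_tesla_dp_gflops = fermi_dp_gflops / 8

print(implied_tesla_dp_gflops)    # 62.5 gigaflops
```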
Working with the future Fermi-based Tesla GPU co-processors and their successors, Oak Ridge is hoping to break through the exaflops barrier within ten years. Getting to 10 petaflops next year with a parallel super that uses the Fermi GPUs is just a down payment.
The important thing about the Fermi GPUs is that Nvidia's CUDA programming environment supports not just C, but C++ as well. Once Fortran compilers can see and dispatch work to the GPUs, the combination of decent double-precision performance and C++ and Fortran support should truly push GPU co-processors into the mainstream. This is exactly what Nvidia, AMD (with its FireStream GPUs), and Intel (with its Larrabee GPUs) are all hoping for.
The question now is, what will Big Blue do to counter these moves onto its hybrid supercomputing turf?
Several years back, when the Cell chips were first being commercialized, they offered terrible double-precision floating-point performance: something like 42 gigaflops of double precision against 460 gigaflops of single precision on a two-socket Cell blade. Big Blue's roadmap then called for a two-socket Cell board that could deliver 460 gigaflops of single-precision and 217 gigaflops of double-precision math. We know this blade server as the BladeCenter QS22.
The roadmap also called for a BladeCenter QS2Z, which would have Cell chips that in turn had two Power cores and a whopping 32 vector processors each, using a next-generation memory and interconnection technology; the QS2Z blade would sport 2 teraflops per blade at single precision and 1 teraflops per blade at double precision.
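Dividing those QS2Z blade numbers down, in a sketch built only from the roadmap figures above, gives a sense of the per-chip and per-vector-unit ratings IBM was projecting:

```python
# Per-chip and per-vector-unit figures implied by the QS2Z roadmap numbers.
blade_dp_gflops = 1000.0         # 1 teraflops double precision per blade
chips_per_blade = 2              # two Cell sockets per blade
spes_per_chip = 32               # vector processors per future Cell chip

chip_dp_gflops = blade_dp_gflops / chips_per_blade
spe_dp_gflops = chip_dp_gflops / spes_per_chip

print(chip_dp_gflops)            # 500.0 gigaflops per chip
print(spe_dp_gflops)             # 15.625 gigaflops per vector unit
```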
That's about twice the double-precision oomph per Cell blade compared to a single forthcoming Fermi GPU. Oak Ridge knew that, of course, but maybe this future Cell chip never made it out of the concept stage, which is where it was in early 2007.
IBM is mum on its Cell roadmap plans at this point, but this future Cell chip was slated for delivery in the first half of 2010, more or less concurrent with the Fermi GPU co-processors. ®