Oak Ridge goes gaga for Nvidia GPUs
Fermi chases Cell for HPC dough
Oak Ridge National Laboratory may not be the first customer for Nvidia's new "Fermi" graphics processor, which was announced yesterday, but it will very likely be one of the largest.
Oak Ridge, one of the giant supercomputing centers managed and funded by the US Department of Energy to do all kinds of simulations and supercomputing design research, has committed to using the GPU co-processor variants of the Fermi chips, the kickers to the current Tesla GPU co-processors, in a future hybrid system that would have ten times the floating point oomph of the fastest supercomputer installed today.
Depending on the tests you want to use, the most powerful HPC box in the world is either the Roadrunner hybrid Opteron-Cell massively parallel custom blade box made by IBM for Los Alamos National Laboratory, or the Jaguar massively parallel XT5 machine at Oak Ridge, which uses only the Opterons to do calculations.
The Roadrunner machine relies on the Cell chips, which are themselves a kind of graphics processor with a single Power core linked in, to do the heavy lifting on floating point calculations. The compute nodes in Roadrunner each comprise a two-socket blade server using dual-core Opteron processors running at 1.8GHz.
Advanced Micro Devices has six-core Istanbul Opterons in the field that are pressing up against the 3GHz performance barrier. But shifting to these faster x64 chips would not radically improve the overall performance of the Roadrunner machine.
Going faster miles an hour
Each Opteron blade uses HyperTransport links out to the PCI-Express bus to link to two dual-socket Cell blades. Each Cell processor is running at 3.2GHz, and has eight vector processors (which are used to do the graphics in the Sony PlayStation 3, among other tasks the Cell chips were created to do). The Cell chips also include one 64-bit Power core to manage these on-chip vector processors, which deliver 12.8 gigaflops of double-precision performance each.
Each Opteron core gets its own Cell chip to do its math for it, like the blonde who lets the nerd do her homework while he thinks they're dating. The beauty is that x64 applications using the Message Passing Interface (MPI) protocol to run in parallel need only minor modifications to run on the hybrid Opteron-Cell box.
And each server node (one x64 node with two Cell co-processor nodes) can deliver 409.6 gigaflops of double-precision floating-point math. On the Linpack Fortran benchmark test, the Roadrunner with 129,600 cores is able to deliver 1.1 petaflops of sustained performance.
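The arithmetic behind those per-node figures is easy to check. A quick sketch (the constants are the article's; the variable names are ours):

```python
# Back-of-the-envelope check of the Roadrunner figures quoted above.
SPES_PER_CELL = 8        # vector processors per Cell chip
GFLOPS_PER_SPE = 12.8    # double-precision gigaflops per vector processor
CELLS_PER_NODE = 4       # two dual-socket Cell blades per Opteron blade

cell_peak = SPES_PER_CELL * GFLOPS_PER_SPE    # gigaflops per Cell chip
node_peak = CELLS_PER_NODE * cell_peak        # gigaflops per server node

print(f"Per-Cell peak: {cell_peak} gigaflops")
print(f"Per-node peak: {node_peak} gigaflops")
```

Eight vector units at 12.8 gigaflops apiece gives 102.4 gigaflops per Cell chip, and four Cells per node gives the 409.6 gigaflops quoted above.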
While the Jaguar XT5 machine at Oak Ridge is powerful, weighing in at 1.06 petaflops, it has to rely on its 150,152 Opteron cores to do the math. What Jaguar needs is some powerful nerds so its blondes can run code, and it looks like the next generation of machines at the supercomputer center is going to be using the Fermi GPUs.
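For what it's worth, dividing the sustained Linpack numbers quoted above by the published core counts shows roughly what each box wrings out per core (bearing in mind that Roadrunner's core count includes the Cell vector units):

```python
# Sustained Linpack gigaflops per core, using the figures quoted above
# (petaflops converted to gigaflops).
roadrunner_gflops = 1.1e6     # 1.1 petaflops sustained
roadrunner_cores = 129_600
jaguar_gflops = 1.06e6        # 1.06 petaflops sustained
jaguar_cores = 150_152

rr_per_core = roadrunner_gflops / roadrunner_cores
jag_per_core = jaguar_gflops / jaguar_cores

print(f"Roadrunner: {rr_per_core:.2f} gigaflops per core")
print(f"Jaguar:     {jag_per_core:.2f} gigaflops per core")
```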
Nice to know the RSX isn't doing anything...
"Each Cell processor is running at 3.2GHz, and has eight vector processors (which are used to do the graphics in the Sony PlayStation 3, among other tasks the Cell chips were created to do)."
Um, no, they're not (usually). The Cell architecture is a sort of half-way house between a GPU and a CPU, with a hefty dose of Cray thrown in, but it's not a GPU in itself. There's an nVidia "RSX" chip in the PS3 that does the graphics.
The Cell's a bit closer to a GPU than one might want in a system that has its own GPU: where it shines against a conventional CPU is in exactly the kind of application where one might consider using a GPU instead. That makes me wonder whether the original idea for the PS3 was to use the Cell on its own, with the RSX added when it became obvious that wasn't going to be fast enough. It might also explain why the PS3 seems to be a bit harder to program for than the Xbox 360 - although I'm not a game developer, so I don't wish to make claims about difficulties that may be hidden by the tool chain.
That said, the Cell is a bit more MIMD than most GPUs, so there's a class of problems for which it beats both a GPU-like heavily SIMD architecture and the relatively-scalar CPUs. Nice to know that you have to get your algorithm right *before* buying a multi-million dollar supercomputer, isn't it?