Appro: HPC's all about the GPUs
Magny-Cours as AMD 'comeback kid'
SC09 In the supercomputer racket, you can be a niche player, a volume server maker with some HPC smarts, or a carcass that one of the other two feasts on.
As a boutique supplier of high performance parallel clusters, Appro International has to use every weapon it can get its hands on to distinguish itself from IBM, Cray, and Silicon Graphics. And it has to find an edge and make sales to prevent itself from becoming a carcass like the original SGI (eaten by Rackable Systems and renamed Silicon Graphics), the original Cray (eaten by Tera Computer and renamed Cray), Linux Networx (eaten by SGI before it was eaten by Rackable), Thinking Machines, Kendall Square, and Cray Research (which Sun Microsystems ate with nary a burp).
John Lee, vice president of advanced technology solutions at Appro, makes no bones about it. "To be a leader in HPC, we have to take advantage of bleeding-edge technology," says Lee. "And we believe that GPU will take off in HPC next year," he adds, referring to the graphics processing unit co-processors that Nvidia and Advanced Micro Devices have been selling and Intel is hoping to get to market sometime before too long.
"Now it is like having a third viable candidate. We have always had Intel and AMD for running code, but now Nvidia is going to be out there with Fermi."
The Fermi kickers to the current Tesla GPU co-processors were detailed in early October by Nvidia and have already been tapped by Oak Ridge National Laboratory, one of the US Department of Energy super centers, to be a part of a next-generation cluster it plans to build.
Unlike the Tesla GPUs, which have crap performance on double precision floating point math, the Fermi GPUs will deliver around 500 gigaflops each. This is enough for Oak Ridge to be talking about building a 10 petaflops hybrid super.
But according to Lee, the double precision math is not the only breakthrough coming with the Fermi GPUs. Error correction is key, and something that has been missing from all GPUs to date. "If bits flip here and there, what is the point of having a machine run very fast if you can't trust the answer you are going to get?" Lee asks rhetorically. "No serious HPC center will touch these GPUs until there is error correction."
Appro already bundles Tesla GPUs in its HyperPower clusters, which pack 304 x64 processor cores and 18,240 GPU cores into a rack that yields 6.56 teraflops in double precision floating point and 78 teraflops in single precision. But very few applications in the HPC space can get by with single precision, and without error correction, that's two strikes against the current crop of GPUs.
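A bit of back-of-envelope on Appro's own per-rack figures shows why the single precision numbers flatter the current parts. Assuming the 240-core Tesla 10-series GPUs of the day (an assumption, not something the article states), the rack works out to 76 GPUs, and the gap between single and double precision is roughly twelve to one:

```python
# Back-of-envelope from the HyperPower per-rack figures in the article.
# The 240-cores-per-Tesla figure is an assumption (Tesla 10-series),
# not stated in the article.
gpu_cores_per_rack = 18_240
cores_per_tesla = 240                       # assumed Tesla 10-series part
gpus_per_rack = gpu_cores_per_rack // cores_per_tesla
print(gpus_per_rack)                        # 76 GPUs per rack

sp_tflops_per_rack = 78.0                   # single precision, per article
dp_tflops_per_rack = 6.56                   # double precision, per article
print(round(sp_tflops_per_rack / dp_tflops_per_rack, 1))  # 11.9x SP:DP gap
```

That roughly 12:1 ratio is what Fermi's promised 500 gigaflops of double precision per GPU is meant to fix.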
When the Fermi GPUs from Nvidia are ready - the word on the street is that Nvidia will start shipping them in the first half of 2010 - Appro says it will bundle them into its HyperPower machines as well as its Xtreme-X1 high-end supercomputers, which currently do not support GPUs. (The Xtreme-X1 line is also being rigged with Intel flash memory and future "Sandy Bridge" Xeons for the "Gordon" super at the San Diego Supercomputer Center, a $20m deal Appro announced last week for delivery in 2011).
As for IBM's Cell co-processors (which are not GPUs but which have multiple extra processing units wrapped around a Power core that provide similar math power) and Intel's future "Larrabee" x64-compatible GPUs, Lee doesn't have much enthusiasm for them. "We believe that Cell is on its way out," says Lee. "With what Nvidia has done, Cell will have a very short life. When we talk about GPU computing, Nvidia is really the only viable player - not just because of Fermi, but because of CUDA."
CUDA is the parallel programming environment that allows C programs to call the GPU to feed it math. It still lacks C++ and Fortran hooks, by the way, but hopefully these will be ready with Fermi. If AMD gets error correction onto its FireStream GPUs, it has a chance to step up and compete, and Lee is not silly enough to count out Intel's Larrabee entirely. "As Intel has shown with the Nehalem Xeons, when it focuses, it can deliver." All that said, Lee believes Nvidia has a 24-month lead in GPUs, which is forever in the supercomputing space.
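For the uninitiated, here is a minimal sketch of what "C programs calling the GPU" looks like under CUDA. The kernel name and sizes are illustrative, not from any particular application:

```cuda
#include <cstdio>

// Illustrative CUDA kernel: each GPU thread scales one array element.
// CUDA extends C with the __global__ qualifier and the <<<...>>>
// launch syntax, which is how the host program feeds math to the GPU.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *d_x;
    cudaMalloc((void **)&d_x, n * sizeof(float));  // allocate on the GPU
    // ... copy host data into d_x with cudaMemcpy here ...
    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n); // launch on the GPU
    cudaDeviceSynchronize();                       // wait for the GPU
    cudaFree(d_x);
    return 0;
}
```

The point Lee is making is that this programming model, not just the silicon, is Nvidia's moat: AMD and Intel have to match both.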
That's a little bit harsh on the Cell chip, which has delivered better double-precision floating point performance than Nvidia's Teslas, has been in the market for two years, and is used in the second-most powerful supercomputer in the world, the 1 petaflops "Roadrunner" hybrid Opteron-Cell super at Los Alamos National Laboratory. IBM's two-socket QS22 blade server delivers 460 gigaflops of single-precision and 217 gigaflops of double-precision math.
"Error correction inadequate"
Fixing up bit flips may seem important, but is hardly sufficient for a "serious HPC center" to trust the alleged answers generated. As is well-known, there are many other sources of error: floating-point rounding, measurement error, approximate physical constants, use of discrete models in place of continuous models. I wouldn't trust any answer, regardless of the presence of logic detecting bit-flips, unless the solutions are accompanied by guaranteed bounds on the errors. Now if these new GPUs had hardware implementations of Interval Arithmetic instructions, that might be something to get excited about.
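The commenter's point can be illustrated with a toy interval arithmetic sketch in Python. This is purely illustrative (real interval libraries control the hardware rounding mode rather than padding bounds, and nothing here reflects any proposed GPU instruction): each value carries guaranteed lower and upper bounds, so measurement error propagates visibly through the computation.

```python
# Toy interval arithmetic: each value is a (lo, hi) pair guaranteed to
# bracket the true result. Real implementations use directed rounding;
# this sketch just widens the bounds slightly to stay conservative.
def widen(lo, hi, eps=1e-15):
    # Pad outward so floating-point rounding cannot shrink the interval.
    return (lo - abs(lo) * eps, hi + abs(hi) * eps)

def iadd(a, b):
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return widen(min(p), max(p))

# A measured quantity known only to within +/- 0.01:
x = (2.99, 3.01)
y = iadd(imul(x, x), (1.0, 1.0))   # compute x*x + 1 with bounds
print(y)  # an interval that certainly contains the true value 10
```

The output interval is roughly (9.94, 10.06): the bit-flips ECC guards against are just one of several error sources, and only bounds like these make the final answer trustworthy end to end.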