Intel Larrabee letdown leaves HPC to Nvidia's Fermi
Not so discrete graphics co-processors
Larrabeen or Neverbee?
The basic idea behind Larrabee is simple enough: take a stripped-down x64 core, crank up its floating point performance by a factor of four with a 512-bit vector processing unit, sprinkle on some special instructions and registers for specific graphics jobs, and plunk anywhere from 24 to 48 working cores on a single die with a ring bus interconnect that maintains cache coherency between the cores. Basically, you make a glorified x64 system on a chip work as a graphics co-processor.
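Those numbers imply eye-catching theoretical peak throughput. Here is a rough sketch of the arithmetic, with the caveat that the clock speed is our assumption (Intel never published one) and that we count a fused multiply-add as two flops per lane per cycle:

```python
# Back-of-the-envelope peak throughput for a Larrabee-style part.
# The 1.5 GHz clock is an assumption, not an Intel figure.

VECTOR_BITS = 512             # width of the vector unit per core
SP_LANES = VECTOR_BITS // 32  # 16 single-precision lanes
DP_LANES = VECTOR_BITS // 64  # 8 double-precision lanes
FLOPS_PER_LANE = 2            # multiply + add per cycle (fused multiply-add)
CLOCK_GHZ = 1.5               # assumed clock speed

def peak_gflops(cores, lanes):
    return cores * lanes * FLOPS_PER_LANE * CLOCK_GHZ

for cores in (24, 48):
    print(f"{cores} cores: "
          f"{peak_gflops(cores, SP_LANES):.0f} SP gigaflops, "
          f"{peak_gflops(cores, DP_LANES):.0f} DP gigaflops")
```

On those assumptions, even the 24-core part would top a teraflop single-precision on paper; the question, as ever, is how much of that peak real code can touch.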
The benefit of this approach, in theory at least, is that Intel is reusing components that both its own engineers and external software developers know well. Writing code that spans x64 CPUs and Larrabee GPUs should be easier than programming for hybrid CPU-GPU setups because of the similarity of the instruction sets.
This was supposed to be the case with Intel's Itanium chip too, of course, but when HP came along and partnered with Intel on Itanium, somehow HP's PA-RISC instruction set got mixed into the future 64-bit Intel architecture, and what emerged was something incompatible with both. We all saw how well that worked out.
Maybe Intel's rumored fears about the performance of Larrabee GPUs have nothing to do with the chip at all, but rather more to do with how difficult Larrabee chips are to program despite the fact that they are based on an x64 core.
Intel is not saying, of course. In fact, all that the chip maker will say is that it is committed to entering the discrete graphics space, as planned for years, and that it will not even discuss its plans until sometime next year.
As far as HPC shops are concerned, Intel needs to address three things before Larrabee's successors can even be thought of as possible alternatives to a Fermi card from Nvidia for a hybrid supercomputer cluster.
First, the future Intel discrete GPU needs error correction on whatever memory is used on the GPU card and on any links between the GPU and the memory system of the CPUs that dispatch number-crunching work and its data to the GPU. Second, the Larrabee successor has to have decent double-precision floating point performance. Nvidia is getting double-precision floating point up where it needs to be with the Fermi chips, but the prior Teslas were pretty awful on this front and, coupled with the lack of ECC, that really hindered the adoption of GPUs as co-processors. And finally, the chips need a low power envelope. While Intel never copped to how much juice a Larrabee co-processor burned, there was talk that the demo boards chewed through 300 watts.
Just to show you how Nvidia is setting the pace, the C2050 and C2070 Fermi GPUs will deliver double-precision performance in the range of 520 to 630 gigaflops; the C2050 has 3GB of GDDR5 graphics memory, while the C2070 has 6GB of local graphics memory. The typical power consumption of these cards is rated at 192 watts, with a 225 watt peak. If Larrabee were offering anything less in terms of gigaflops per watt for double-precision calculations, then it wouldn't make any sense to bring it to market except as a development platform.
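A quick back-of-the-envelope calculation shows where that efficiency bar sits. This sketch works out gigaflops per watt from the figures above (pairing the 520-gigaflops number with the C2050 and 630 with the C2070 is our reading of the range, not an Nvidia spec), and what a 300-watt Larrabee board would have had to deliver just to break even:

```python
# Gigaflops per watt for the Fermi Tesla cards, from the figures quoted
# above. Attributing 520 GF to the C2050 and 630 GF to the C2070 is an
# assumption; only the range is given.

cards = {"C2050": 520, "C2070": 630}   # double-precision gigaflops
TYPICAL_W, PEAK_W = 192, 225           # quoted typical and peak power draw

for name, gflops in cards.items():
    print(f"{name}: {gflops / TYPICAL_W:.2f} GF/W typical, "
          f"{gflops / PEAK_W:.2f} GF/W peak")

# What a Larrabee board would need at the rumored 300-watt demo-board
# draw just to match the C2070's peak-power efficiency:
larrabee_watts = 300
target = cards["C2070"] / PEAK_W * larrabee_watts
print(f"Larrabee break-even: {target:.0f} DP gigaflops at {larrabee_watts} W")
```

At 2.8 gigaflops per watt peak for the C2070, a 300-watt Larrabee card would need roughly 840 double-precision gigaflops just to keep pace, well beyond anything Intel demoed.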
So what is Intel to do? Well, buying Nvidia would be an attractive option if it really wanted to make the managers at AMD freak out and give antitrust regulators in the United States and Europe plenty to work on for the next couple of years. Intel could certainly afford Nvidia, which has a market capitalization of just under $9bn as we go to press - which, by the way, is completely nuts for a company that had $3.49bn in sales over the past four quarters and $541.1m in losses. (But hey, that's Wall Street for you.) The reason Nvidia and AMD shareholders breathed a sigh of relief this week as Larrabee was mothballed is not that the graphics chip business is all that great, but that competition from deep-pocketed Intel was at least not going to make it any worse.
It would be funny if Intel tried to buy Nvidia, but that is not going to happen. And no one else, not even an IBM that needs to do something interesting if it wants a replacement for Cell in the HPC racket, can afford to pay what Nvidia would likely cost to acquire. But if IBM wasn't worried about taking some losses in the short term until the PC market recovers, and it stopped its crack-habit stock buybacks, the company could shell out the $15bn or so it might take to bag Nvidia. Of course, IBM balked at paying half that for Sun Microsystems, and that is a company with three times the revenues of Nvidia and about the same lack of profits.
It looks like Nvidia will have the GPU co-processor racket for HPC customers to itself for a while, then. The CUDA programming environment for the Tesla family of GPUs is just icing on the cake. ®