Intel Larrabee letdown leaves HPC to Nvidia's Fermi
Not so discrete graphics co-processors
Comment Intel has never been particularly precise about what its "Larrabee" graphics chips were, so it is difficult to be sure how disappointed we should all be. And considering the company's track record outside of the x86 and x64 chip racket - its failed networking business and Itanium are but two examples of its woes - it's hard to say what to expect from Intel when, and if, it finally markets discrete graphics chips that can also be used as number-crunching co-processors for servers and workstations.
At the SC09 supercomputing trade show in November, when Justin Rattner, Intel's chief technology officer, gave the keynote presentation and his finale was to demonstrate a Larrabee co-processor being overclocked so it could hit 1 teraflops of single-precision floating point performance, there was no hint that Larrabee was on the rocks - the demonstration was clearly meant to show quite the opposite. That made the abrupt change last week that turned Larrabee from a graphics chip that was expected some time next year to a "software development platform for internal and external use," as Intel's statement put it, all the more jarring.
Advanced Micro Devices just keeps getting cut break after break, doesn't it? First Intel settles all outstanding antitrust lawsuits with AMD for a cool $1.25bn, allowing AMD to pay down some debts and clean up its books. And now Intel is delaying its entry into the discrete graphics market at the same time that IBM is admitting that it will no longer develop new PowerPC Cell co-processors for use in its own server lines.
The delayed entry of Intel's Larrabee and the dead-ending of IBM's Cell (at least on blade servers) give AMD's Firestream GPUs a better chance against Nvidia's technically impressive Fermi family of Tesla 20 GPUs. The Fermi chips will be available as graphics cards in the first quarter of next year and will be ready as co-processors and complete server appliances from Nvidia in the second quarter. And they will likely take a dominant market share, too, particularly among supercomputer customers who want error correction on the GPUs - a feature that AMD's Firestream GPUs currently lack.
But still, a market always wants at least two alternatives, even if it rarely wants more than three, and that means AMD still has time to get ECC onto its Firestream GPUs and compete head-to-head with Nvidia's Fermi GPUs.
Larrabeen or Neverbee?
The basic idea behind Larrabee is simple enough: take a stripped-down x64 core, crank up its floating point performance by a factor of four with a 512-bit vector processing unit, sprinkle on some special instructions and registers for specific graphics jobs, and plunk anywhere from 24 to 48 working cores on a single die with a ring bus interconnect that maintains cache coherency between the cores. Basically, you make a glorified x64 system-on-a-chip work as a graphics co-processor.
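The arithmetic behind those design choices is easy to sketch. The core counts (24 to 48) and the 512-bit vector width are from the description above; the 1GHz clock and the assumption of one fused multiply-add per lane per cycle are illustrative guesses, not Intel figures:

```python
# Back-of-envelope peak single-precision throughput for a Larrabee-style
# chip. A 512-bit vector unit holds sixteen 32-bit lanes; a fused
# multiply-add counts as two floating point operations per lane per cycle.
def peak_sp_gflops(cores, clock_ghz, vector_bits=512, flops_per_lane=2):
    lanes = vector_bits // 32  # 16 single-precision lanes
    return cores * lanes * flops_per_lane * clock_ghz

for cores in (24, 32, 48):
    print(f"{cores} cores at 1GHz: {peak_sp_gflops(cores, 1.0):.0f} gigaflops")
```

Under these assumed numbers, a 32-core part lands at roughly 1 teraflops single-precision - the same ballpark as the overclocked demo Rattner showed at SC09.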
The benefit of this approach, in theory at least, is that Intel is reusing components that both its own engineers and external software engineers know well. In theory, writing code that spans x64 CPUs and Larrabee GPUs should be easier than programming for other hybrid CPU-GPU setups because of the similarity of the instruction sets.
This should have been the case with Intel's Itanium chip, of course, but when HP came along and partnered with Intel on Itanium, somehow HP's PA-RISC instruction set got mixed up into the future 64-bit Intel architecture - and what emerged was incompatible with both. We see how well that worked out.
Maybe Intel's rumored fears about the performance of Larrabee GPUs have nothing to do with the chip at all, but rather more to do with how difficult Larrabee chips are to program despite the fact that they are based on an x64 core.
Intel is not saying, of course. In fact, all that the chip maker will say is that it is committed to entering the discrete graphics space, as planned for years, and that it will not even discuss its plans until sometime next year.
As far as HPC shops are concerned, Intel needs to address three things before Larrabee's successors can even be thought of as possible alternatives to a Fermi card from Nvidia for a hybrid supercomputer cluster.
First, the future Intel discrete GPU needs error correction on whatever memory is used on the GPU card and on any links between the GPU and the memory systems of the CPUs that dispatch number-crunching work and its data to the GPU. Second, the Larrabee successor has to have decent double-precision floating point performance. Nvidia is getting double-precision floating point up where it needs to be with the Fermi chips, but the prior Teslas were pretty awful on this front and, coupled with the lack of ECC, this really hindered the adoption of GPUs as co-processors. And finally, the chips need a low power envelope. While Intel never copped to how much juice a Larrabee co-processor burned, there was talk that the demo boards chewed through 300 watts.
Just to show you how Nvidia is setting the pace, the C2050 and C2070 Fermi GPUs will deliver double-precision performance of between 520 and 630 gigaflops; the C2050 has 3GB of GDDR5 graphics memory, while the C2070 has 6GB of local graphics memory. The typical power consumption of these cards is rated at 192 watts, with 225 watts peak. If Larrabee was offering anything less in terms of gigaflops per watt for double-precision calculations, then it doesn't make any sense to bring it to market except as a development platform.
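That gigaflops-per-watt yardstick is worth working out explicitly. Using only the Fermi figures quoted above - 520 to 630 double-precision gigaflops, 192 watts typical and 225 watts peak - the range looks like this:

```python
# Double-precision efficiency of the Fermi-based Tesla cards, computed
# from the figures in the text: 520-630 DP gigaflops, 192 W typical
# power draw, 225 W peak.
def gflops_per_watt(gflops, watts):
    return gflops / watts

worst = gflops_per_watt(520, 225)  # low-end throughput at peak power
best = gflops_per_watt(630, 192)   # high-end throughput at typical power
print(f"Fermi DP efficiency: {worst:.2f} to {best:.2f} gigaflops/watt")
```

So Fermi sits somewhere between roughly 2.3 and 3.3 double-precision gigaflops per watt. A 300-watt Larrabee board would have needed around 700 double-precision gigaflops just to match the low end of that range - a hypothetical threshold, since Intel never published double-precision figures for the part.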
So what is Intel to do? Well, buying Nvidia would be an attractive option if it really wanted to make the managers at AMD freak out and give antitrust regulators in the United States and Europe plenty to work on for the next couple of years. But Intel can't afford to buy Nvidia anyway. Nvidia has a market capitalization of just under $9bn as we go to press - which, by the way, is completely nuts for a company that had $3.49bn in sales and $541.1m in losses over the past four quarters. (But hey, that's Wall Street for you.) The reason why Nvidia and AMD shareholders breathed a sigh of relief this week as Larrabee was mothballed is not that the graphics chip business is all that great, but that competition from deep-pocketed Intel was no longer going to make it worse.
It would be funny if Intel tried to buy Nvidia, but that is not going to happen. And no one else, not even an IBM that needs to do something interesting if it wants to have a replacement for Cell in the HPC racket, can afford to pay what Nvidia would likely cost to acquire. But if IBM weren't worried about taking some losses in the short term until the PC market recovers, and it stopped its crack-habit stock buybacks, the company could shell out the $15bn or so it might take to bag Nvidia. Of course, IBM balked at paying half that for Sun Microsystems, and that was a company with three times the revenues of Nvidia and about the same lack of profits.
It looks like Nvidia will have the GPU co-processor racket for HPC customers to itself for a while, then. The CUDA programming environment for the Tesla family of GPUs is just icing on the cake. ®