Intel Larrabee letdown leaves HPC to Nvidia's Fermi

Not so discrete graphics co-processors

Larrabeen or Neverbee?

The basic idea behind Larrabee is simple enough: take a stripped-down x64 core, crank up its floating point performance by a factor of four with a 512-bit vector processing unit, sprinkle on some special instructions and registers for specific graphics jobs, and plunk anywhere from 24 to 48 working cores on a single die with a ring bus interconnect that maintains cache coherency between the cores. Basically, you make a glorified x64 system-on-a-chip work as a graphics co-processor.
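As a back-of-the-envelope sketch of what that design implies, here is the theoretical peak double-precision throughput for a hypothetical mid-range configuration. The 32-core count, 2 GHz clock, and one fused multiply-add per vector lane per cycle are assumptions for illustration, not Intel-published figures:

```python
# Rough peak double-precision throughput for a hypothetical Larrabee part.
# Assumed figures: 32 cores (Intel talked about 24 to 48), a 2.0 GHz clock,
# and one fused multiply-add (2 flops) per vector lane per cycle.
# A 512-bit vector unit holds eight 64-bit doubles.
cores = 32                      # assumed mid-range core count
lanes = 512 // 64               # double-precision lanes per 512-bit vector
flops_per_lane_per_cycle = 2    # assumed fused multiply-add
clock_ghz = 2.0                 # assumed clock speed

peak_gflops = cores * lanes * flops_per_lane_per_cycle * clock_ghz
print(f"Hypothetical peak: {peak_gflops:.0f} gigaflops double precision")
```

Peak numbers like this are theoretical ceilings, of course; sustained performance on real HPC codes is invariably a good deal lower.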

The benefit of this approach, in theory at least, is that Intel would be reusing components that both its own engineers and external software developers know well. Writing code that spans x64 CPUs and Larrabee GPUs should be easier than programming for other hybrid CPU-GPU setups because the instruction sets are so similar.

This should have been the case with Intel's Itanium chip, of course. But when HP partnered with Intel on Itanium, somehow HP's PA-RISC instruction set got mixed into the future 64-bit Intel architecture, and what emerged was incompatible with both. We saw how well that worked out.

Maybe Intel's rumored fears about the performance of Larrabee GPUs have nothing to do with the chip at all, but rather more to do with how difficult Larrabee chips are to program despite the fact that they are based on an x64 core.

Intel is not saying, of course. In fact, all that the chip maker will say is that it is committed to entering the discrete graphics space, as planned for years, and that it will not even discuss its plans until sometime next year.

As far as HPC shops are concerned, Intel needs to address three things before Larrabee's successors can even be thought of as possible alternatives to a Fermi card from Nvidia for a hybrid supercomputer cluster.

First, the future Intel discrete GPU needs error correction on whatever memory is used on the GPU card and on any links between the GPU and the memory systems of the CPUs that dispatch number-crunching work and its data to the GPU. Second, the Larrabee successor has to have decent double-precision floating point performance. Nvidia is getting double-precision floating point up where it needs to be with the Fermi chips, but the prior Teslas were pretty awful on this front and, coupled with the lack of ECC, this really hindered the adoption of GPUs as co-processors. And finally, the chips need a low power envelope. While Intel never copped to how much juice a Larrabee co-processor burned, there was talk that the demo boards chewed through 300 watts.

Just to show you how Nvidia is setting the pace, the C2050 and C2070 Fermi GPUs will deliver double-precision performance of 520 and 630 gigaflops, respectively; the C2050 has 3GB of GDDR5 graphics memory, while the C2070 has 6GB of local graphics memory. Typical power consumption for these cards is rated at 192 watts, with a 225 watt peak. If Larrabee was offering anything less in terms of gigaflops per watt for double-precision calculations, then it doesn't make any sense to bring it to market except as a development platform.
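Running the article's own numbers gives a sense of the bar Larrabee would have had to clear. The quick sketch below uses the Fermi figures quoted above; the 300 watt Larrabee figure is the rumored demo-board draw, not an official rating:

```python
# Gigaflops per watt for Nvidia's Fermi Tesla cards, using the figures above.
typical_watts = 192

c2050_gflops = 520
c2070_gflops = 630

c2050_eff = c2050_gflops / typical_watts   # roughly 2.7 gflops/watt
c2070_eff = c2070_gflops / typical_watts   # roughly 3.3 gflops/watt

# If a Larrabee board really drew 300 watts, this is the double-precision
# throughput it would have needed just to match the C2050's efficiency.
larrabee_watts = 300
breakeven_gflops = c2050_eff * larrabee_watts   # roughly 812 gigaflops

print(f"C2050: {c2050_eff:.2f} gflops/W, C2070: {c2070_eff:.2f} gflops/W")
print(f"300 W Larrabee break-even: about {breakeven_gflops:.0f} gigaflops")
```

On these assumptions, a 300 watt Larrabee would have needed north of 800 double-precision gigaflops just to tread water against the slower of the two Fermi cards.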

So what is Intel to do? Well, buying Nvidia would be an attractive option if it really wanted to make the managers at AMD freak out and give antitrust regulators in the United States and Europe plenty to work on for the next couple of years. But Intel can't afford to buy Nvidia anyway, which has a market capitalization of just under $9bn as we go to press - which, by the way, is completely nuts for a company that had $3.49bn in sales in the past four quarters and $541.1m in losses. (But hey, that's Wall Street for you.) The reason Nvidia and AMD shareholders breathed a sigh of relief this week as Larrabee was mothballed is not that the graphics chip business is all that great - it isn't - but that competition from deep-pocketed Intel was not going to make it any worse.

It would be funny if Intel tried to buy Nvidia, but that is not going to happen. And no one else, not even an IBM that needs to do something interesting if it wants a replacement for Cell in the HPC racket, can afford to pay what Nvidia would likely cost to acquire. But if IBM wasn't worried about taking some losses in the short term until the PC market recovers, and it stopped its crack-habit stock buybacks, the company could shell out the $15bn or so it might take to bag Nvidia. Of course, IBM balked at paying half that for Sun Microsystems, and that was a company with three times the revenues of Nvidia and about the same lack of profits.

It looks like Nvidia will have the GPU co-processor racket for HPC customers to itself for a while, then. The CUDA programming environment for the Tesla family of GPUs is just icing on the cake. ®
