Original URL: https://www.theregister.com/2010/09/27/olds_blog/

IBM Goes 'GPU-riffic' with new blade

GPU goodness — not just for HPC anymore

By Dan Olds

Posted in HPC, 27th September 2010 23:31 GMT

GTC Video Blog IBM made big news on the first day of last week's GPU Technology Conference by announcing that it'll roll out an Nvidia Fermi–based expansion blade. While it's not formally announced yet (the plan is to do so in Q4), IBM had one at the show and walked me through it for the video below.

It isn't a standalone server; it's a single Fermi GPU with 6GB of memory that clicks onto a host, initially an IBM HS22 two-socket blade with 12 DDR3 DIMM slots and all the typical IBM bladey trimmings. It attaches to the host blade via an x16 Gen 2 PCIe connection.

The ports on the Fermi expansion blade have a pass-through feature, so up to four expansion blades can be attached to a single host. That's a hell of a lot of GPU goodness, and it's certainly GPU-riffic.
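There's no special trickery required on the host side, as far as I can tell: each attached Fermi should simply show up as another CUDA device behind that PCIe link. Here's a minimal, purely illustrative sketch (not IBM's code; the device count and memory figures depend entirely on how many expansion blades you've actually bolted on) of how an application running on the host blade could enumerate whatever GPUs are attached:

// Illustrative sketch only: enumerate the CUDA devices a host blade exposes.
// The number of devices and their memory sizes are assumptions; they depend
// on how many expansion blades are attached (up to four per host, per IBM).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // On a fully loaded host you'd expect up to four Fermi parts,
        // each reporting roughly 6GB of global memory.
        std::printf("GPU %d: %s, %.1f GB global memory\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}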

Also GPU-riffic is the fact that these expansion blades fit into the most popular IBM BladeCenter chassis (the E, H, and S varieties). That's significant: it marks a big step for GPU computing. There are a lot of these blade chassis (and HS22 blades) out in Customer-Land, and these new GPU blades really open up the market for non-HPC-centric buyers to give GPUs a try.

These products, and others like them from HP and Dell, make GPUs a standard IT item rather than some exotic technology that's suitable only for labs, Hadron Colliders, or data centers supporting super villains.

IBM also showed the Fermi offering for its ultra-dense iDataPlex server, as you'll see in the video. I remember getting an early briefing back when iDataPlex was just a vision in Jimmy the Bull's mind. At that time, it was firmly aimed only at the mega Web 2.0-type data centers (think Yahoo!, Google, and folks like that). It was supposed to be extremely dense and efficient, with servers that had only what was absolutely necessary to function, and none of the extra RAS or management hardware features.

I had two initial thoughts during those discussions. The first was that I didn't totally believe IBM could stay true to the vision of a stripped-down server. I figured it would be hard for them to resist the urge to tack on extra hardware features that might seem "nice to have" but would pump up the cost, size, and power draw. I was wrong on this score: they successfully delivered what they promised.

My second thought was, "It's dense, cheap to buy and operate, and you're looking to sell it in big chunks, so why not target HPC too? It's a good fit."

They didn't agree with my reasoning, at least initially, but the market said otherwise, and iDataPlex has become a standard IBM HPC offering. ®

Bootnote

I'm still working on making the term "GPU-riffic" a part of the industry lexicon. Anyone who is supporting "GPU-tastic" as an alternative should reconsider their stance.