IBM shakes up the server game with lean, cool iDataPlex
Freaking out rivals 10,000 units at a time
It is with some measure of awe that we introduce you to IBM's iDataPlex server.
The system itself is quite remarkable. IBM has reworked its approach to rack servers, allowing it to place twice as many systems in a single cabinet. This attack centers on delivering the most horsepower possible in a given area while also reducing power consumption. IBM hopes the iDataPlex unit will attract today's service providers, which buy servers by the thousands and tens of thousands, as well as big businesses such as oil and gas firms and media companies that may pursue the grid-ish data center computing model pioneered to some degree by Google.
But the really awe-inspiring bit of iDataPlex comes from the fact that IBM is willing to go after this market at all and that it did so without screwing up the hardware design. That's not at all to say that IBM lacks engineering know-how. Obviously, that's not the case. Rather, it's that IBM has a tendency to try to cram higher-end technology into simple designs, eroding their initial magic.
IBM started work on iDataPlex 18 months ago, answering a call directly from CEO Sam "Why won't my PRs let me have an interview with El Reg" Palmisano. The company wanted to create a fresh take on server computing that would show dramatic energy savings.
Gregg McKnight, a distinguished engineer at IBM, led the work around the new system and went right after today's rather long and flat pizza box style rack servers. His team decided that it made little sense to drag cool air over the whole length of these servers, since that results in components near the front of the server staying cool while those at the back get warm. In addition, you end up pumping tons of hot air off the back of a rack.
With iDataPlex, IBM relies on what we've come to think of as half-depth servers. It then combines two racks' worth of those systems side-by-side into a single system that's about 24 inches deep, as opposed to the 48 inches of standard systems.
All of the nodes inside this system ship in a 2U chassis with its own power supply and fans. Customers then slide a pair of two-socket motherboards into the chassis "like cookie sheets into an oven," McKnight told us. This approach lets IBM share fans across the systems and use larger fans.
"Each motherboard is also independently serviceable, and you can pull one out while the other is still running," McKnight added.
At launch in July, IBM will offer up a variety of Xeon chips as options for the motherboards. It may do Opteron chips if there's "customer demand" and may, of course, head to Power country one day.
In total, IBM will let customers pick from 22 different motherboard designs that have various processor, PCI slot, memory and storage options. You can, for example, go with low-voltage Xeons and lower-end memory and disks for serious energy savings – just as Google does with its own boxes today.
IBM's rack design also cuts out some of the sheet metal and other waste associated with running two racks next to each other in a data center. As a result, you have about 100U of space to play with across the combined rack, which leaves 84U for the servers and 16U for switches and power distribution units.
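The combined-rack arithmetic above can be sketched out quickly. The node counts below follow from the 2U chassis and two-boards-per-chassis design described earlier; they're our own back-of-envelope illustration, not an official IBM spec sheet:

```python
# Back-of-envelope rack arithmetic for the combined iDataPlex cabinet,
# using the figures quoted in the article. Per-chassis node counts are
# our own illustration, not IBM's official configuration.

TOTAL_U = 100        # usable space across the combined rack
SWITCH_PDU_U = 16    # reserved for switches and power distribution units
SERVER_U = TOTAL_U - SWITCH_PDU_U   # leaves 84U for servers

CHASSIS_U = 2            # each chassis occupies 2U
BOARDS_PER_CHASSIS = 2   # two motherboards slide in "like cookie sheets"
SOCKETS_PER_BOARD = 2    # two-socket motherboards

chassis = SERVER_U // CHASSIS_U          # 42 chassis
boards = chassis * BOARDS_PER_CHASSIS    # 84 two-socket boards
sockets = boards * SOCKETS_PER_BOARD     # 168 processor sockets

print(chassis, boards, sockets)  # 42 84 168
```

So a fully populated combined rack could, on this reckoning, hold 84 two-socket servers — roughly double what a standard 42U rack of 1U boxes manages.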
And, since the cool-running system is half as deep as regular racks, you can slap it up next to a wall rather than wasting space in the middle of a data center. In addition, IBM believes the design affords customers more flexibility with rearranging their data centers for cooling efficiency.
How does iDataPlex stack up then?
Well, IBM compared its unit to leading energy-efficient boxes and reckons that it uses 40 per cent less power than typical 1U servers. It also thinks it can sell these systems for 20-25 per cent less than normal 1U servers due to the volume approach.
If you really want to get serious, IBM offers a Rear Door Heat eXchanger for iDataPlex that costs about $16,000. You can use this to funnel water or specialized coolants through the iDataPlex unit.
In one test case, IBM monitored a rack in a 74F room, pumping 138F air out of the rear of the rack. After IBM turned on the eXchanger, the exhaust temperature dropped to 60F within 20 seconds, and the system actually started to cool the data center.
Looking ahead, IBM also plans to start selling data centers in containers just like Sun Microsystems and Rackable and will make iDataPlex the main part of that attack.
And now it's time to have a look at where this system stacks up against the competition.
I have to agree that this looks like some interesting craftsmanship and some elegant engineering.
However, history is riddled with elegant engineering (Concorde).
I expect that IBM will sell quite a few of these. After all, loyal customers will buy their vendors' products, even if they're not optimal. Never discount the persuasive power of their sales force, either. And then there's always the outsourced, government and hosted business. I expect that IBM will find a way to plop a few in there.
HPCC doesn't count - most of those are good publicity but don't make any money for anyone.
But will it be more efficient than alternative designs? Will it offer class-leading power and cooling numbers, versus filling a Supermicro double-server rack and popping it inside a self-enclosed Liebert rack?
If it's more expensive over three, four or five years for acquisition plus operating costs versus a competitor's rack-dense servers or blades, why would anyone want it?
[coat, because if this thing is as cool as it's claimed, I may need to wrap up warm in the data centre]
RE: RE: El G
Sorry, may have been a little rash there. Just that there were a number of quite harsh comments being bandied around, and it was nice to see an engineer showing pride in their handiwork.
@Matt - Sorry, kinda skimmed the comments and mixed you up with the AC above. Still an inexcusable snipe by moi, though :(
RE: El G
Erm... I said it was interesting engineering, congrats to Mr McKnight on what must have been a challenging design brief, but what I said was I wasn't sure of the business model when compared to standard blades with virtualisation. And as to "Go lick a HTC smartphone", I've used those and prefer a Blackberry, thanks!
Besides, shouldn't we have the Sunshine crowd jumping up and down telling us T1/T2 are the green kings of webserving, not mini racks (actually, Sun do have a good point on the webserving bit)? ;)
And only forty patents? What, were the IBM patent trolls on holiday? Most new IBM kit seems to come out with at least 300 patents pending, etc! :)