Original URL: http://www.theregister.co.uk/2008/04/23/ibm_idataplex/

IBM shakes up the server game with lean, cool iDataPlex

Freaking rivals 10,000 units at a time

By Ashlee Vance

Posted in Servers, 23rd April 2008 15:08 GMT

It is with some measure of awe that we introduce you to IBM's iDataPlex server.

The system itself is quite remarkable. IBM has reworked its approach to rack servers, allowing it to place twice as many systems in a single cabinet. The attack centers on delivering the most horsepower possible in a given area while also reducing power consumption. IBM hopes the iDataPlex unit will attract today's service providers, which buy servers by the thousands and tens of thousands, as well as big businesses such as oil and gas firms and media companies that may pursue the grid-ish data center computing model pioneered to some degree by Google.

But the really awe-inspiring bit of iDataPlex comes from the fact that IBM is willing to go after this market at all and that it did so without screwing up the hardware design. That's not at all to say that IBM lacks engineering know-how. Obviously, that's not the case. Rather, it's that IBM has a tendency to try and cram higher-end technology into simple designs, eroding their initial magic.

IBM started work on iDataPlex 18 months ago, answering a call directly from CEO Sam "Why won't my PRs let me have an interview with El Reg" Palmisano. The company wanted to create a fresh take on server computing that would show dramatic energy savings.

Gregg McKnight, a distinguished engineer at IBM, led the work around the new system and went right after today's rather long and flat pizza-box-style rack servers. His team decided that it made little sense to drag cool air over the whole length of these servers, since that results in components near the front of the server staying cool while those at the back get warm. In addition, you end up pumping tons of hot air off the back of a rack.

[Image: the iDataPlex system]

With iDataPlex, IBM relies on what we've come to think of as half-depth servers. It has then combined two racks' worth of those systems side by side into a single cabinet that's about 24 inches deep, as opposed to the 48 inches of standard systems.

All of the nodes inside this system ship in a 2U chassis with its own power supply and fans. Customers then slide a pair of two-socket motherboards into the chassis "like cookie sheets into an oven," McKnight told us. This approach lets IBM share fans across the systems and use larger fans.

"Each motherboard is also independently serviceable, and you can pull one out while the other is still running," McKnight added.

At launch in July, IBM will offer up a variety of Xeon chips as options for the motherboards. It may do Opteron chips if there's "customer demand" and may, of course, head to Power country one day.

In total, IBM will let customers pick from 22 different motherboard designs that have various processor, PCI slot, memory and storage options. You can, for example, go with low-voltage Xeons and lower-end memory and disks for serious energy savings – just as Google does with its own boxes today.

IBM's rack design also cuts out some of the sheet metal and other waste associated with running two racks next to each other in a data center. As a result, you have about 100U of space to play with across the combined rack, which leaves 84U for the servers and 16U for switches and power distribution units.
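For the arithmetically inclined, here's a quick back-of-the-envelope sketch of that density claim. The 84U of server space, the 2U chassis and the two boards per chassis come from IBM's description above; the conventional 42U rack of 1U servers we compare against is our own assumption.

```python
# Back-of-the-envelope density sketch.
# iDataPlex figures (84U of server space, 2U chassis, two boards per
# chassis) come from IBM's description; the 42U baseline rack of 1U
# two-socket servers is an assumed point of comparison.

standard_rack_units = 42                    # assumed conventional 42U rack
standard_nodes = standard_rack_units * 1    # one 1U two-socket server per U

idataplex_server_units = 84                 # rack space left for servers
chassis_height_u = 2                        # each iDataPlex chassis is 2U
boards_per_chassis = 2                      # two two-socket boards per chassis
idataplex_nodes = (idataplex_server_units // chassis_height_u) * boards_per_chassis

print(f"Conventional rack: {standard_nodes} two-socket nodes")
print(f"iDataPlex rack:    {idataplex_nodes} two-socket nodes")
print(f"Density ratio:     {idataplex_nodes / standard_nodes:.1f}x")
```

That works out to 84 nodes against 42 – which is where the "twice as many systems in a single cabinet" claim comes from.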

And, since the cool-running system is half as deep as regular racks, you can slap it up next to a wall rather than wasting space in the middle of a data center. In addition, IBM believes the design affords customers more flexibility with rearranging their data centers for cooling efficiency.

How does iDataPlex stack up then?

Well, IBM compared its unit to leading energy efficient boxes and reckons that it uses 40 per cent less power than typical 1U servers. It also thinks it can sell these systems at 20-25 per cent less than normal 1U servers due to the volume approach.
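To put a rough dollar figure on that 40 per cent claim, here's a hypothetical sketch. Only the 40 per cent reduction and the 84-node rack come from IBM; the per-node wattage and the electricity price are illustrative assumptions of our own.

```python
# Hypothetical annual power-cost sketch for one full rack.
# IBM's claimed 40 per cent power saving is taken at face value;
# the 300W-per-node draw and $0.10/kWh price are assumptions.

nodes_per_rack = 84
watts_per_node = 300                 # assumed draw for a typical 1U two-socket box
kwh_price = 0.10                     # assumed electricity price, $/kWh
hours_per_year = 24 * 365

baseline_kwh = nodes_per_rack * watts_per_node / 1000 * hours_per_year
idataplex_kwh = baseline_kwh * (1 - 0.40)    # IBM's claimed 40 per cent cut

print(f"Baseline:  {baseline_kwh:,.0f} kWh/yr  (${baseline_kwh * kwh_price:,.0f})")
print(f"iDataPlex: {idataplex_kwh:,.0f} kWh/yr  (${idataplex_kwh * kwh_price:,.0f})")
print(f"Saving:    ${(baseline_kwh - idataplex_kwh) * kwh_price:,.0f} per rack per year")
```

Under those assumptions a single rack saves on the order of $9,000 a year in electricity – before you even count the cooling bill.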

If you really want to get serious, IBM offers a Rear Door Heat eXchanger for iDataPlex that costs about $16,000. You can use this to funnel water or specialized coolants through the iDataPlex unit.

In one test case, IBM monitored a rack in a 74F room, pumping 138F air out of the rear of the rack. After turning on the eXchanger, the temperature dropped to 60F in 20 seconds, and the system actually started to cool the data center.

Looking ahead, IBM also plans to start selling data centers in containers, just like Sun Microsystems and Rackable, and will make iDataPlex the main part of that attack.

And now it's time to have a look at how this system stacks up against the competition.

This iDataPlex box is in many ways a retaliation against systems put out today by the likes of Rackable, Supermicro and other white box makers. It's also a shot against Intel, which crafts custom, low-power motherboards for Google.

All of these companies have created compact, energy-efficient gear that appeals to service providers desperate to lower their hardware costs and power bills.

Companies such as RLX pioneered this push about eight years ago by pumping out blade servers that, for example, ran on Transmeta's laptop chips. Sadly, once the big boys got a hold of blades, they ran away from the low-power compact systems and toward fat, speedy boxes. So, this left a vacuum of sorts where companies such as Rackable could step in and cater to service providers.

Now, we're seeing IBM, Sun and Dell – which will build custom gear for service providers – realign their lineups, and IBM appears to have the most dramatic take to date.

"The first blade systems were introduced by scrappy upstarts who defined the customer need, built the market, then got crushed by the big players who came in with faster product cycles, more engineering, and lower prices," Dan Olds, head of the Gabriel Consulting Group, told us. "The same thing is happening now - and the iDataPlex from IBM is the first Tier 1 system to aim directly at the large-scale web infrastructure market - much to the detriment of smaller players like Rackable.

"The key thing to keep in mind about this thing is that it's aimed squarely at the Web 2.0 market - vast number of servers, all running a small number of identical apps. What these guys really care about is TCO and ROI – what it's going to cost to buy and to run their gear. IBM hit the target with iDataPlex. It's extremely dense (less floor space), very power efficient, and runs cool - particularly with the rear door heat exchanger.

"Some of their competitors will respond with their version of blades, and thus miss the market entirely. Even though blades fit well into enterprise data centers, they aren't attractive to the web crowd at all , as they're too power hungry and too expensive."

You might think the service provider market rather niche and a low-margin battle not worth pursuing. IBM, however, seems convinced that more and more companies will do the software work needed to take advantage of this type of system.

"You look at something like blades with RocketLogix(RLX) and see that that was niche and then it became mainstream," McKnight said. "We are seeing that same model replaying now.

"If a customer is willing to write their application to tolerate hardware failures and to do without things like redundant power and cooling and RAID to prevent failures, then we can remove a lot of cost and energy and present a lower cost of computing."

IBM expects many of its business high-performance computing customers to rework their software in this way and more or less take their clusters to the next level.

[Image: thermal shot of a red-hot normal system versus the cool blue iDataPlex box – one of these is hotter than the other]

If iDataPlex lives up to its billing, then IBM will certainly appear to have caught the likes of Sun, HP and Dell off guard.

To its credit, Sun pushed the container model early, and IBM admitted that customers are demanding these types of boxes. Sun also has very compact, memory-rich designs, but we've yet to see something that equates to iDataPlex. Meanwhile, Dell will sell you a bespoke motherboard with low-power chips and memory but certainly not a double-stuffed double rack. And, over at HP, we find the company concentrating on improving data center design through various cooling systems but not really shipping any of this new service provider-friendly gear. (As we understand it, HP has partnered with Rackable on a number of deals in the past.)

We're still not convinced about the long-term prospects of these vendors beating each other up for lower and lower margins as the data center build out continues. The prize for winning this contest seems to just be a gutted business unless you can convince these customers to shell out for software and services. Sadly for the vendors, most of the customers seem happy to do a lot of open source work on their own.

But the journey to guttation should be an interesting one, and iDataPlex has set us on our way. For that, we'll forgive IBM for the product's name. ®