Original URL: https://www.theregister.com/2008/10/30/rackable_cookie_sheet_servers/

Rackable does cookie sheet servers

A Google homage

By Timothy Prickett Morgan

Posted in Channel, 30th October 2008 20:32 GMT

Boutique data center server and storage maker Rackable Systems has unveiled an homage to Google's original home-grown servers - which were essentially bare motherboards thrown on cookie sheets with rubber mats and stacked in bakery racks.

You can't charge a lot of money for Google's server design, so Rackable's CloudRack racks and related storage trays look a little more rugged, quite a bit more organized, and certainly more professional - not that any of this matters to an upstart, always-right company like Google.

The CloudRack rack and tray design is not just about creating servers that are cheaper than standard rack-mounted or blade servers, inasmuch as they have a lot less metal in them. (Thanks to the formerly exploding and still rapidly growing Chinese economy, metals of just about every kind are increasingly expensive).

Taking the metal skins off the servers not only saves money, it decreases the weight of the rack of servers (meaning data center floor strain and human back strain are lower) and makes the gear easier to cool (since there is no metal obstructing air flow across the machinery).

The CloudRack setup is also about making servers more serviceable. (If you have never tried to slide out a dust-encrusted, wire-entangled server from a rack and then open it to fix it, you can't appreciate how much of a pain in the neck this is). And for companies with thousands or tens of thousands of servers, serviceability is a big deal because someone has to run around and fix broken components, and this takes time, and time is money, especially when the time is related to human beings. In a funny way, as Google has discovered in so many ways, less is more.

Rackable is obviously targeting the same cloudy customers with the CloudRack - meaning massively scaled-out server infrastructures with either scientific, data warehousing, or Web 2.0 workloads. That's what it has been doing with its other rack designs. And it is not clear whether the new CloudRack machinery is more or less expensive than prior designs either, because Rackable hides behind the fact that its setups tend to be heavily customized as an excuse for not providing list pricing for its components. (Which is obviously silly, but maybe necessary for Rackable to sell against the tier one, general purpose rack and blade server makers like IBM, Hewlett-Packard, Dell, and Sun Microsystems).

The presumption any customer should take into the deal is that the CloudRack racks should be about the same price as prior racks from Rackable and that the server trays should be cheaper than rack designs of equivalent computing power at equivalent density, because the metal is gone. If Rackable doesn't agree with that assessment, IBM has dense iDataPlex gear, HP has two-server blades for its c7000 chassis, and Dell is thrilled to bring in its Data Center Solutions unit to create custom-made servers and data center designs for you.

The CloudRack comes in 22U or 44U sizes, and a tray that slides into the rack takes up 1U of space, just like a regular rack server. Rather than putting tiny muffin fans and power supply fans in each tray, the rack itself has two or four highly efficient axial fans that span the width of the servers.

A large fan moves air much more efficiently and quietly than a collection of smaller fans (often put in series two or three deep and arrayed many fans wide) moving the same volume of air. Saeed Atashie, director of server products at Rackable, says a half-rack of standard 1U servers has something on the order of 200 muffin and power supply fans, and these are replaced with two large axial fans in the 22U CloudRack.
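Running the numbers - with the fans-per-server counts being my back-of-the-envelope assumptions rather than Rackable figures - shows how Atashie gets to roughly 200:

```python
# Rough fan arithmetic for a half-rack (22U) of conventional 1U servers.
# The per-server fan counts are illustrative assumptions, not Rackable's
# figures: a typical 1U box has a row of small muffin fans plus a fan
# inside its power supply.
servers_per_half_rack = 22
muffin_fans_per_server = 8   # assumed
psu_fans_per_server = 1      # assumed

conventional_fans = servers_per_half_rack * (muffin_fans_per_server + psu_fans_per_server)
cloudrack_fans = 2           # two large axial fans in the 22U CloudRack

print(f"Conventional 22U of 1U servers: about {conventional_fans} fans")  # about 198
print(f"22U CloudRack: {cloudrack_fans} axial fans")
```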

Making the rack the only skin for a collection of servers, and the large, efficient cooling fans the only things moving air, is obviously simpler. And the heritage of rack servers as an offshoot of tower servers (which do need a skin and their own cooling) is about the only explanation for why rack servers have had metal skins, lots of fans, and dedicated power supplies for all these years.

The decade long wait for toplessness

That it has taken a decade for server vendors to go topless inside the racks is a bit of a mystery, but we have to take progress where we get it. It must be far easier to ship a rack server that is encased in steel, since a server's delicate parts are not exposed, but the 50 per cent reduction in rack weight that the CloudRack line offers compared to standard rack servers also results in something else: lower shipping costs. And the 20 per cent or so lower power usage compared to standard 2U rack servers with two sockets per server is also important to prospective clients.

The CloudRack trays can be equipped with two different kinds of servers, and two different processor options for each. One motherboard is based on a standard EATX (12 inches by 13 inches) form factor. This EATX board comes in one flavor that supports Advanced Micro Devices' dual-core Opteron 2200 HE or quad-core Opteron 2300 HE processors, which have a 68-watt thermal design power, and another one that is based on Intel's dual-core Xeon 5200 LV or quad-core 5400 LV processors (which are low-voltage 50-watt parts).

These EATX boards support up to 128 GB of main memory and have dual Gigabit Ethernet ports. Or, for maximum core density, Rackable can put two 7-inch by 13-inch "Mini-SSI" motherboards (that term is not an industry term yet, but rather one Rackable has made up) in a tray; these come in Xeon or Opteron flavors. The Mini-SSI boards support the same Xeon LV or Opteron HE processors and offer up to 48 GB of memory with the Xeons and up to 64 GB with the Opterons.

The trays also come with local storage for the servers, up to eight 3.5-inch drives using either EATX or Mini-SSI boards, and a 250-watt power supply that is rated at 92.5 per cent efficiency. The trays have been designed to support future "Nehalem" Xeons from Intel in the first quarter of 2009 and future "Shanghai" Opterons in the second quarter of 2009.

The CloudRack chassis is called the CR1000, and the Xeon trays have been given the TR1000-SC1 moniker while the Opteron trays are called the TR1000-F1.

Using the Mini-SSI boards and quad-core processors, Rackable can get 704 cores and 352 TB of disk (using 1 TB drives) in a 44U rack. (That's 2U higher than the 42U standard rack, but you were only drying fruit and wet laundry up there anyway.) That's 16 cores per rack unit and 8 TB per rack unit. Other designs may meet or even beat the core count, but they fall well short on local storage, and if customers need to add storage modules, then they lose cores.
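For what it is worth, those density figures fall straight out of the tray specs. Here's a quick sketch, assuming two dual-socket Mini-SSI boards per 1U tray (the dual-socket part is inferred from the totals rather than spelled out by Rackable) and eight 1 TB drives per tray:

```python
# Back-of-the-envelope density math for a 44U CloudRack full of Mini-SSI
# trays. The two-sockets-per-board figure is inferred from the quoted
# totals, not something Rackable spells out.
rack_units = 44
boards_per_tray = 2       # two Mini-SSI boards per 1U tray
sockets_per_board = 2     # assumed dual-socket
cores_per_socket = 4      # quad-core Opteron 2300 HE or Xeon 5400 LV
drives_per_tray = 8       # 3.5-inch drives
tb_per_drive = 1

cores_per_u = boards_per_tray * sockets_per_board * cores_per_socket  # 16
tb_per_u = drives_per_tray * tb_per_drive                             # 8

print(f"{cores_per_u * rack_units} cores per rack")  # 704
print(f"{tb_per_u * rack_units} TB per rack")        # 352
```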

For instance, the two-server ProLiant BL2x220c blade from HP can jam 26 cores per rack unit (1,024 cores in a 42U rack), but only offers 400 GB of disk per rack unit and only 16 GB of memory per node. If you take half the blades out to add storage blades, then the CPU density per rack unit is lower than with the CloudRacks. IBM's iDataPlex design, with 16 cores and 2 TB per rack unit, comes closer to the CloudRacks in terms of balance.
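If you want to see where HP's 26 cores per rack unit comes from, the same sort of arithmetic works for the BL2x220c, assuming the standard 10U c7000 chassis with 16 blades, two dual-socket quad-core nodes per blade, and four chassis in a 42U rack (those are standard c7000 numbers, not figures HP quotes for this particular comparison):

```python
# Density arithmetic for HP's two-node ProLiant BL2x220c blades in a
# c7000 chassis. Chassis height, blade count, and node count are the
# standard c7000/BL2x220c figures; dual-socket quad-core nodes are
# assumed, which matches the 1,024-core total quoted above.
chassis_height_u = 10
blades_per_chassis = 16
nodes_per_blade = 2
sockets_per_node = 2
cores_per_socket = 4

cores_per_chassis = blades_per_chassis * nodes_per_blade * sockets_per_node * cores_per_socket  # 256
cores_per_u = cores_per_chassis / chassis_height_u                                              # 25.6
chassis_per_rack = 4  # 40U of chassis in a 42U rack, with 2U left over

print(f"{cores_per_u:.1f} cores per rack unit")                     # 25.6, call it 26
print(f"{cores_per_chassis * chassis_per_rack:,} cores per rack")   # 1,024
```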

Atashie says that all of Rackable's major customers are evaluating the CloudRacks now in their data centers - not just looking over the specs, but putting them through their paces. He would not, however, name names. The CloudRack and associated trays are available now, and as I said above, pricing is whatever. ®