Original URL: https://www.theregister.com/2009/03/18/rackable_cloudrack_two/

Rackable shrinks CloudRack cookie sheets

Pre-heat data center to 104 degrees...

By Timothy Prickett Morgan

Posted in Channel, 18th March 2009 04:02 GMT

Rackable Systems, one of the handful of niche server makers out there on the cutting edge of compute density and energy efficiency, will today upgrade its relatively new CloudRack line of "cookie sheet" servers.

The original CloudRacks debuted back in October 2008. Rather than putting metal enclosures around the horizontal blades, as Rackable has done in the past, the company just plunked down components (motherboards, power supplies, disk drives) in a topless fashion on top of a metal sheet.

As the company's name suggests, Rackable doesn't do the blade server chassis like IBM, Hewlett-Packard, and Dell. It thinks at the rack level and does horizontal blades. With the CloudRacks announced last year, the company launched 22U and 44U racks that were 26 inches wide and had 1U server and storage drawers. Each rack came with two (22U) or four (44U) large and efficient axial fans that cool the whole shebang, and each supported EATX or Mini-SSI motherboards using a variety of Xeon and Opteron processors. Each tray came with a 250-watt power supply and up to eight 3.5-inch SATA disk drives (2.5-inch SATA drives are also supported in some configurations).

With the CloudRack 2 servers announced today, the rack has been shrunk to 24 inches in width and made a little taller at 23U and 46U. The new width, says Saeed Atashie, director of server products at Rackable, fits better with the standard floor tile size in a data center. (The racks are 40 inches deep.)

This time around, the trays don't include the power supply, which has been shifted out into the rack enclosure itself and which provides direct conversion from three-phase AC power coming out of the data center walls to 12V power needed by the servers on the tray. So, the "server" doesn't have a cover, doesn't have any fans, and doesn't have a power supply.

By moving the power supplies off each tray, Rackable can now cram three servers based on Intel's impending "Nehalem EP" Xeon processors onto a tray instead of two, and Atashie says that the company will soon be able to rejigger the components on the tray to get four whole Xeon servers on a tray, along with disks.

Eventually, Rackable will support anything from Pico-ITX to EATX motherboards on the trays, using a variety of processors. The company is working on a system that will use Intel's Atom embedded processors, which will be sold as a variant of the MicroSlice servers. (The MicroSlice line of trays was announced in late January using Mini-ITX and Micro-ATX motherboards.)

The CloudRack C2 cabinet is something that Rackable is hoping will get some business flowing. The 46U rack gets rid of the axial fans and uses arrays of fans stacked two deep and three wide; fourteen rows of these fans cover most of the back of the rack. On a normal rack of servers, the many fans used in the box can consume anywhere from 5,000 to 5,300 watts of juice, or roughly 25 per cent of the power that goes into a rack. But with the CloudRack 2 machines, Rackable says it can drop the juice that fans use to 8 per cent of input rack power.
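If you want a rough sense of what that shift means in watts, here's a back-of-envelope sketch using only the figures quoted above. The assumption that total rack input power stays roughly the same in both cases is mine, not Rackable's.

```python
# Back-of-envelope fan power comparison, using the figures quoted above.
# Assumption (mine, not Rackable's): total rack input power is about the same
# in both cases, i.e. the 20-21 kW implied by "5,000-5,300 W is ~25 per cent".

fan_watts_conventional = (5000, 5300)                          # fans in a normal rack
rack_watts = tuple(w / 0.25 for w in fan_watts_conventional)   # ~20,000-21,200 W implied

# CloudRack 2: fans claimed to drop to about 8 per cent of rack input power
fan_watts_cloudrack = tuple(w * 0.08 for w in rack_watts)      # ~1,600-1,700 W

savings = tuple(c - n for c, n in zip(fan_watts_conventional, fan_watts_cloudrack))
print(f"Implied rack input power: {rack_watts[0]:.0f}-{rack_watts[1]:.0f} W")
print(f"CloudRack 2 fan draw:     {fan_watts_cloudrack[0]:.0f}-{fan_watts_cloudrack[1]:.0f} W")
print(f"Fan power saved per rack: {savings[0]:.0f}-{savings[1]:.0f} W")
```

That puts the fan savings somewhere in the region of 3.4 kW per rack, which adds up quickly across a hyperscale data center.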

On the right side of the CloudRack 2 cabinet is a set of six redundant, hot-swap power rectifiers and three ports for three-phase AC input. The setup Rackable has created can deliver 99 per cent efficiency between the input AC power and the 12V DC power going into the server. This is great, but it has other benefits too. With no fans on the trays, there is no fan-generated electrical noise making the on-board power circuitry inside the servers work less efficiently.
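Taking that 99 per cent figure at face value, here's a quick sketch of what the difference means in conversion losses per rack. The 90 per cent baseline for a conventional per-server AC-to-DC supply chain and the 20 kW rack load are my assumptions for illustration, not numbers from Rackable.

```python
# Waste heat in AC-to-12V conversion for an assumed 20 kW rack of DC load.
# The 99 per cent figure is Rackable's claim as quoted above; the 90 per cent
# baseline for a conventional per-server power supply is an assumption used
# purely for comparison.

rack_dc_load_w = 20_000  # assumed DC load delivered to the trays

def conversion_loss(dc_load_w: float, efficiency: float) -> float:
    """Watts burned in conversion to deliver dc_load_w at a given efficiency."""
    return dc_load_w / efficiency - dc_load_w

for eff in (0.90, 0.99):
    print(f"{eff:.0%} efficient conversion wastes {conversion_loss(rack_dc_load_w, eff):,.0f} W")
```

Under those assumptions, the rectifier shelf throws away a couple of hundred watts per rack where a conventional supply chain would throw away a couple of kilowatts.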

The 104 degree data center

More importantly, at super-dense, hyperscale data centers, by taking out the power supplies and doing very efficient and direct conversion of AC power to DC needed by the motherboard and disks, Rackable says that customers can run these units at 40 degrees Celsius (or 104 degrees Fahrenheit). Most server gear is rated at a peak of 35 degrees C, and that is mostly because of the sensitivity of disk drives (and their whirring parts) to heat.

Atashie ventures that companies putting in solid state drives instead of disk drives could run the boxes even hotter, but says the company hasn't tested this yet in its labs. The important thing is that because the CloudRacks can run hotter, the data center air conditioning doesn't have to be cranked up to the level where you need a jacket.

With four 2.5-inch drives per system, a CloudRack 2 setup can cram as many as 1,280 cores in a 46U rack, which works out to 32 cores per 1U of space (that's 80 server mobos in total). With the MicroSlice Mini-ITX boards, the CloudRack 2 racks can cram up to 240 servers into that 46U rack, for a total of 480 cores. (That's only 12 cores per 1U of rack space, but this setup is designed to maximize the count of servers, not the cores.) This kind of density is as good as or better than anything IBM, HP, and Dell can put into the field with their best blades.
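For those checking the math: the "per 1U" figures don't divide evenly into the full 46U cabinet, and they only come out whole if the divisor is roughly 40U of server payload rather than the whole rack. That divisor is my guess, not something Rackable states, so treat the sketch below as a sanity check rather than gospel.

```python
# Sanity check on the density figures quoted above. The totals (1,280 cores,
# 480 cores, 46U cabinet) are from the article; the assumption that "per 1U"
# is computed over ~40U of server payload rather than the full cabinet is
# mine -- it is simply the divisor that makes the quoted per-U numbers whole.

xeon_cores_total = 1280
microslice_cores_total = 480
assumed_server_payload_u = 40   # assumption: rack units actually holding server trays

print(f"Xeon setup:       {xeon_cores_total / assumed_server_payload_u:.0f} cores per U")
print(f"MicroSlice setup: {microslice_cores_total / assumed_server_payload_u:.0f} cores per U")
print(f"Over the full 46U cabinet: {xeon_cores_total / 46:.1f} and "
      f"{microslice_cores_total / 46:.1f} cores per U respectively")
```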

Well, at least until they launch new products over the coming weeks.

Rackable builds its systems to order on a customer-by-customer basis, and it does not provide list prices. The former is fine, but the latter is a bad business practice as far as I can tell. ®