Original URL: http://www.theregister.co.uk/2009/06/10/hp_cookie_sheet_servers/

HP serves up cookie sheet servers

The lighter way to enjoy data

By Timothy Prickett Morgan

Posted in Servers, 10th June 2009 15:30 GMT

Hewlett-Packard has launched its own variant on the "cookie sheet" minimalist server design championed by Google and imitated by commercial server makers.

For hyperscale data center operators, taking weight and cost out of servers is as important as power conservation and performance - one of the things that makes these upper-crust customers different from even large commercial server buyers, and worlds away from small and medium businesses. But hyperscale shops buy tens or hundreds of thousands of servers a year, which is the main reason why HP is getting in on the act.

The new ProLiant SL series of cookie sheet machines follows fast on the heels of the uber-dense ProLiant DL rack-based servers that were announced last week. The DL1000 rack server allows one, two, or four two-socket servers using quad-core "Nehalem EP" Xeon 5500 processors to be put into a 2U form factor - the kind of space that would normally hold just two sockets. The SL6000 Scalable System, as the cookie sheet machines are called, offers the same kinds of densities as the DL1000s, but instead of wrapping server nodes in a lot of metal, it puts them on trays that slide into 2U rack-mounted cases.

Back in October 2008, when it was still just Rackable Systems, the new (and presumably improved) Silicon Graphics created its own cookie sheet designs, called CloudRacks, which slid trays of servers and storage directly into 22U or 44U racks instead of into 2U or 4U chassis that in turn mounted into racks. Why HP didn't cut even more metal out of the design and think at the rack level is unclear, but it looks like it wanted to create a cookie sheet design that used standard racks (either made by HP or by others) instead of forcing buyers to get its own racks.

What is clear is that the SL6000 machines are more squarely aimed at HP's need to sell more boxes into hyperscale accounts. John Gromala, director of marketing for the company's Industry Standard Servers division (that's the one that sells ProLiant rack and tower and BladeSystem blade servers) within its Enterprise Storage and Servers group, says that compared to traditional rack servers, the SL6000 setups use power supplies and cooling fans that are shared across nodes and tuned for specific server configurations, allowing for up to 28 per cent less power consumption per node than regular HP ProLiant rack servers. And by carving out about 31 per cent of the weight, hyperscale data centers will pay less to ship the servers they buy to their facilities and will be able to pack the machines in tighter.

Data centers designed in decades gone by usually can't handle the kinds of power densities that modern facilities can, and they can't handle the extra weight, either. And, significantly, the SL6000s will cost about 10 per cent less than equivalent ProLiant server nodes, and that is before the volume discounts kick in.

The ProLiant SL6000 line consists of the z6000 chassis, which is a 2U box that the SL series server nodes (they are not blades, inasmuch as they do not share a common midplane for systems management or share peripherals) slide into. Hyperscale customers are big on having server nodes and all their I/O accessible from the front, usually so they can put racks back to back and thereby make the most of the square footage of their data centers (otherwise, they have to leave room to service the racks from the back). Most commercial servers have hot plug disks in the front, which helps, but the internals of the machine are definitely not easily accessible and all of the networking is in the back.

The three SL series server nodes that slide into the z6000 chassis fix that. All of these server nodes are based on the Xeon 5500 processors, just like the DL1000 dense machines, but all of the cabling for the server nodes comes out of the front of the box. Disk drives are mounted on the servers and do not come in hot plug slots because, as Gromala explains it, hyperscale customers run distributed workloads and tend to replace an entire server node all at once, including processors, memory, and storage.

Saving here, saving there

The first SL6000 server node is the SL160z, which is a server tray that takes up 1U of space horizontally in the z6000 chassis and which includes one server node that has 18 DDR3 memory slots to support the maximum of 144 GB of memory available for Nehalem EP servers using Intel's "Tylersburg" 5520 chipset. (Cisco Systems, as you know, will this month ship blade servers based on the Nehalem EPs that have a homegrown memory expansion ASIC that boosts capacity to 384 GB for a two-socket blade.)

The SL160z has room for two 3.5-inch SATA or SAS disks. The SL170 uses a half-width motherboard that has its memory crimped back to 16 slots (for 128 GB max) and room for six 3.5-inch disks on that 1U tray. The SL2x170 server tray has two half-width Nehalem EP server nodes, each with up to 128 GB of memory and one 3.5-inch disk. As you can see, hyperscale customers don't seem to be all that interested in the power savings that come from 2.5-inch SATA or SAS disks, or else HP would be putting them in the ProLiant SL server nodes. (This strikes me as odd, but these customers are probably more interested in raw capacity, dollars per I/O, and dollars per GB than anything else when it comes to local disk storage on their server nodes.)
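Those memory ceilings line up with 8 GB DDR3 DIMMs, the biggest parts commonly available for Nehalem EP boxes at the time. A quick sketch of the arithmetic - the 8 GB DIMM size is our assumption, not a figure HP gave:

```python
# Back-of-the-envelope check on the quoted memory ceilings, assuming
# 8 GB DDR3 DIMMs in every slot (assumption, not an HP spec).
DIMM_GB = 8

nodes = {
    "SL160z (full-width)": 18,   # DIMM slots per node
    "SL170 (half-width)": 16,
    "SL2x170 (per node)": 16,
}

for name, slots in nodes.items():
    print(f"{name}: {slots} slots x {DIMM_GB} GB = {slots * DIMM_GB} GB max")
```

Run it and you get the 144 GB and 128 GB figures quoted above.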

Gromala would not comment on when or if HP might deliver ProLiant SL machines based on Advanced Micro Devices' Opteron line of processors, but it seems likely that it will eventually do this, particularly if the Opterons can demonstrate performance or price/performance benefits compared to Nehalem boxes.

All of these ProLiant SL machines will be available in July; pricing for individual parts of the boxes has not yet been announced.

By HP's math, the shift from standard rack servers to the SL iron can result in significant savings. Gromala did some back-of-the-envelope calculations for a 100,000 square foot data center and reckons that 88,032 server nodes could be crammed into that space by putting four SL nodes in each z6000 chassis and fitting 1,048 racks onto the floor. By going dense and using the SL nodes, HP reckons a hyperscale data center operator could save $14.5m on server acquisition costs.
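Those figures are easy to reverse-engineer if you assume standard 42U racks packed solid with 2U z6000 chassis - the rack height is our assumption, but the numbers come out exactly:

```python
# Reverse-engineering HP's back-of-the-envelope density figures.
# Assumed: standard 42U racks fully populated with 2U z6000 chassis,
# four SL nodes per chassis (as quoted above).
RACK_U = 42            # assumed rack height
CHASSIS_U = 2          # z6000 chassis height
NODES_PER_CHASSIS = 4
RACKS = 1_048          # HP's figure for a 100,000 sq ft data center

chassis_per_rack = RACK_U // CHASSIS_U                  # 21
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS   # 84
total_nodes = nodes_per_rack * RACKS

print(f"{nodes_per_rack} nodes per rack, {total_nodes:,} nodes in total")
# -> 84 nodes per rack, 88,032 nodes in total
```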

Those servers would use 170 megawatt-hours less electricity per year thanks to the shared power and cooling inside the z6000 chassis, and that translates into another $13m in savings. And in terms of weight savings, using the SL designs means chopping out 838.5 tons (US, not metric), which adds up on the shipping bill and means data centers can be a little less rugged. This saves money, too.
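Spread across the 88,032 nodes from the example above, that weight saving works out to roughly 19 pounds per server - a rough per-node figure of ours, not one HP quoted:

```python
# Spreading HP's quoted weight saving across the node count from the
# density example above. US (short) tons: 2,000 lb each.
TONS_SAVED = 838.5
LB_PER_TON = 2_000
TOTAL_NODES = 88_032

lb_per_node = TONS_SAVED * LB_PER_TON / TOTAL_NODES
print(f"Roughly {lb_per_node:.1f} lb shaved off each server node")
# -> Roughly 19.0 lb shaved off each server node
```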

As usual, HP has a slew of polysyllabic services that go along with the new iron, such as the Data Center Environmental Edge collection of services for implementing the HP Extreme Scale-Out (ExSO) portfolio. Basically, HP will be recommending that customers deploy DL1000 or SL6000 machines to boost density or save money, or both. ®