HP ProLiant SL270s Gen8 beast masters eight coprocessors

Nvidia Tesla, Intel Xeon Phi – do my bidding math

If you have decided that you want to build a grunting number-cruncher that crams a hefty number of Tesla or Xeon Phi coprocessors into a single chassis, then Hewlett-Packard wants to talk to you about the new ProLiant SL270s Gen8 server.

The new SL270s Gen8 slides into the SL6500 "scalable system" modular chassis, which puts it somewhere in between a rack server and a blade server. The SL6500 chassis comes in 2U and 4U variants, and is distinct from the "Argos" SL4500 big data scalable systems that HP announced last week, which are targeted at big data jobs that need to put lots of storage on a node. With the SL6500s, the design is about packing as many coprocessors into a chassis as possible to maximize floating point computing per watt.

HP has been peddling super-dense ProLiant SL machines since June 2009, with the launch of the initial SL6000s. These machines were a reaction, in part, to the success that Dell's Data Center Solutions unit and Silicon Graphics/Rackable Systems were having selling dense machines without all the expensive management and redundancy features that commercial blade servers have, and which are legs on a snake for cheapskate hyperscale data center operators and supercomputer centers. When you have a massively parallel machine, you expect nodes to fail rather than trying to prevent failure, and your software is supposed to cope with those failures without crashing.

HP debuted the much more impressive SL6500 systems in October 2010, which allowed up to three GPUs to be configured in a half-wide 2U server node called the SL390s G7. (There is a 1U variant of this node, too.) The SL6500 chassis is 4U high, so you could put four of these half-wide 2U SL390s G7 nodes into it. Doing so gave you a maximum of eight Xeon 5600 processors and a dozen Tesla M20X0 cards crammed into a 4U space.

To boost the GPU coprocessor density even further, HP rolled out a new 4U high variant of the SL390s G7 node in April 2011 that was similarly half-wide and allowed a wonking eight GPU coprocessors to be hung off a two-socket Xeon 5600 server. So with two of these darlings, you could put four Xeons and sixteen Tesla M20X0 GPU coprocessors in that 4U of space. Well, you could if you had enough money for electricity.
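For those keeping score at home, the density arithmetic works out simply. Here is a minimal Python sketch using only the node heights and GPU counts quoted above; the half-wide packing (two nodes abreast per level of the chassis) is the one detail it relies on:

```python
# Coprocessor density in a 4U SL6500 chassis, from the figures quoted above.
# Half-wide nodes sit two abreast, so a 4U chassis holds
# (4 / node_height_in_U) * 2 nodes.

CHASSIS_HEIGHT_U = 4

nodes = {
    # name: (height in U, CPU sockets per node, max GPUs per node)
    "SL390s G7 (2U)": (2, 2, 3),
    "SL390s G7 (4U)": (4, 2, 8),
}

for name, (height_u, sockets, gpus) in nodes.items():
    per_chassis = (CHASSIS_HEIGHT_U // height_u) * 2
    print(f"{name}: {per_chassis} nodes per chassis -> "
          f"{per_chassis * sockets} Xeons, {per_chassis * gpus} GPUs in 4U")

# SL390s G7 (2U): 4 nodes per chassis -> 8 Xeons, 12 GPUs in 4U
# SL390s G7 (4U): 2 nodes per chassis -> 4 Xeons, 16 GPUs in 4U
```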

The Xeon E5-based SL270s Gen8 coprocessor-lovin' server

With the latest machine to come out of HP in the scalable systems lineup, the SL6500 chassis stays the same, but there is a new SL270s Gen8 node that is based on Intel's Xeon E5-2600 processors. This node has room for eight x16 peripheral cards, and this time around they can be the new Xeon Phi x86-based coprocessors or the new Tesla K10 GPU coprocessors from Nvidia.

The spec sheet for the new SL server node says that it supports the Tesla M2070Q, M2075, M2090, and K10 GPU coprocessor cards, and notes that you need a minimum of 4GB of main memory per socket if you install GPU coprocessors in the node. It also says that GPU coprocessors with a 250 watt power draw have to be clocked down to 225 watts if you have seven or eight cards in the node. If you only put six in, you can run these older cards at their full 250 watts.
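That derating rule is simple enough to put into code. The following Python sketch merely restates the spec sheet constraint described above; the function is our own illustration, not any tool HP ships:

```python
# Per-card power cap for GPU coprocessors in an SL270s Gen8 node, per the
# spec sheet rule: 250 watt cards get clocked down to 225 watts when seven
# or eight cards are installed; six or fewer run at the full 250 watts.

def allowed_card_watts(card_count: int, card_tdp_watts: int = 250) -> int:
    if not 1 <= card_count <= 8:
        raise ValueError("the node takes one to eight coprocessor cards")
    if card_tdp_watts >= 250 and card_count >= 7:
        return 225  # clocked down to stay inside the node's power budget
    return card_tdp_watts

for n in (6, 7, 8):
    watts = allowed_card_watts(n)
    print(f"{n} cards: {watts} W each, {n * watts} W total for coprocessors")

# 6 cards: 250 W each, 1500 W total for coprocessors
# 7 cards: 225 W each, 1575 W total for coprocessors
# 8 cards: 225 W each, 1800 W total for coprocessors
```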

The spec sheet does not mention the Xeon Phi x86 coprocessor, but HP's announcement for the server node says that the Xeon Phi will be an option early next year.

To further clarify, HP tells El Reg that it will not support any actively cooled coprocessors, meaning those models with their own fans. So the Xeon Phi 3120, which has a 300 watt power envelope and a fan on the side like a graphics card, is not going to be supported. This is the cheaper model, at $2,000. But the passively cooled Xeon Phi 5110P, which sits in a 225 watt thermal envelope and costs $2,649, will be supported in the SL270s Gen8 node.

HP confirmed that both the Tesla K20 and K20X GPU coprocessors, which are both passively cooled, will be supported in the SL270s Gen8 node.

It would be more interesting, of course, if HP could support actively cooled GPUs inside the box, such as the Quadro K5000 graphics card from Nvidia, or the latest FirePro and Radeon cards from Advanced Micro Devices. At the very least, it would be interesting if AMD made a passively cooled variant of its latest FirePro card, the double-GPU FirePro S10000 announced last week.

That FirePro S10000 is absolutely competitive with the Tesla K10 and Quadro K5000 cards in terms of raw single-precision computing, delivering 5.91 teraflops, and it can also handle 1.48 teraflops at double precision. The Tesla K10 delivers 4.58 teraflops SP, but only 0.19 teraflops DP, and it cannot be used as a graphics card.

The K20 and K20X cards from Nvidia offer 1.17 and 1.31 teraflops at DP, respectively, and 3.52 and 3.95 teraflops at SP, but again, they cannot do graphics. Supporting actively cooled graphics cards would allow the server node to be used as the back end for visualization walls.
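To line those ratings up, here is a quick Python sketch that tabulates the per-card figures quoted above and derives the DP-to-SP ratio, which is where the K10 parts company with the rest. The inputs are the vendors' peak numbers as cited in this article; the eight-card aggregate at the end is our own back-of-envelope figure, ignoring any derating and the host CPUs:

```python
# Peak throughput figures quoted above, in teraflops per card.
cards = {
    # name: (single precision TF, double precision TF)
    "FirePro S10000": (5.91, 1.48),
    "Tesla K10":      (4.58, 0.19),
    "Tesla K20":      (3.52, 1.17),
    "Tesla K20X":     (3.95, 1.31),
}

for name, (sp, dp) in cards.items():
    print(f"{name:15s} SP {sp:.2f} TF  DP {dp:.2f} TF  DP/SP {dp / sp:.2f}")

# A fully loaded eight-card node of K20X parts would peak at roughly:
peak_dp = 8 * cards["Tesla K20X"][1]
print(f"8 x K20X: {peak_dp:.1f} DP teraflops per node, before host CPUs")
```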

The SL270s Gen8 node has eight PCI-Express x16 peripheral slots, plus one additional low-profile x8 slot and an x8 FlexibleLOM mezzanine slot for linking the node out to Ethernet or InfiniBand networks. (LOM is short for LAN on motherboard.) The server node has eight DDR3 memory slots per socket and maxes out at 256GB of memory using 16GB memory sticks.
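The memory ceiling is easy to verify from those slot counts. A trivial check, assuming the two-socket layout that the Xeon E5-2600 pairing implies:

```python
# Maximum memory on an SL270s Gen8 node, from the figures above.
SOCKETS = 2            # two-socket Xeon E5-2600 node
SLOTS_PER_SOCKET = 8   # eight DDR3 slots per socket
MAX_DIMM_GB = 16       # biggest supported memory stick

print(SOCKETS * SLOTS_PER_SOCKET * MAX_DIMM_GB, "GB max")  # 256 GB
```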

Xeon E5-2600 processors with 60, 80, 95, and 115 watt thermal envelopes are supported in the node; if you want to use 130 watt parts, HP says to talk to someone first. The machine has a base dual-port Gigabit Ethernet controller welded onto the mobo, plus that FlexibleLOM slot. The unit has room for eight 2.5-inch disks for local node storage.

The SL270s Gen8 server will ship in December and will have a base list price of $6,166 for a machine with one eight-core Xeon E5-2660 spinning at 2.2GHz, 8GB of memory, and a Smart Array B320i disk controller. The machine is not yet in the online configurators at HP, so we can't gin up a proper configuration and tell you what it could cost. ®
