Original URL: https://www.theregister.com/2012/09/19/dell_zeus_c8000_hyperscale_server/

Dell bends shiny server linings for denser clouds

Zeus chassis, lightning not included

By Timothy Prickett Morgan

Posted in Systems, 19th September 2012 17:18 GMT

You can only cram so much stuff into a chassis that is two rack units high, and so server maker Dell is shifting to a 4U chassis for its latest "Zeus" PowerEdge C8000 design. The new Zeus chassis is designed from the ground up to pack more CPUs, coprocessors, and storage into a 4U space than Dell was able to get into two 2U C6220 enclosures.

Not that Dell is going to stop peddling the C6220, which pre-launched back in February ahead of Intel's Xeon E5 processors for two-socket servers.

Armando Acosta, product manager for Dell's PowerEdge C cloudy infrastructure server line, tells El Reg that the C6220s are "selling really well" and "still have their place," but concedes that there will be some cannibalization with the C8000. The reason: the C8000 offers more density and puts storage, GPU, and x86 coprocessors in closer proximity to the compute within the chassis.

With the C6220s and prior enclosures aimed at hyperscale data centers, Dell crammed four server nodes into a chassis and made them share power and cooling, but not the integrated switching that blade server enclosures provide. There is a fair amount of storage per enclosure, with the C6220 sporting a dozen 3.5-inch or two dozen 2.5-inch drives across the front.

As for coprocessors, Dell had an outboard PowerEdge C410x, which crams sixteen PCI-Express slots into a 3U chassis. All told, you could do eight server nodes with two GPU coprocessors each (and soon Xeon Phi x86 coprocessors) in 7U of rack space.

Dell's C8000 chassis spitting out sleds

There are a few problems with this approach, which the C8000 fixes.

First, Hadoop data-munching clusters are craving ever larger numbers of spindles per node. Only a few years ago, six 1TB drives did the trick, but that doubled to a dozen drives per node, and last year cutting-edge Hadoop shops wanted 24 drives per node, according to Acosta.

Here in 2012, with ever-increasing core counts in Xeon and Opteron processors, some Hadoop shops want 36 or 40 drives per two-socket node. You cannot easily cram that ratio of servers and storage into the existing C6220 chassis. And when it comes to GPU coprocessors, you don't necessarily want a PCI-Express switch in the middle or GPUs sharing PCI-Express capacity. You want a server node and its GPUs right next to each other, just like you want disks in the enclosure to be directly attached to a particular server node.
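To put those spindle counts in context: Hadoop's HDFS DataNode spreads its block storage across whatever JBOD mount points it is given, one directory per disk with no RAID in between, which is why these shops count raw drives per node rather than array capacity. Here is a minimal sketch, in Python, of generating that directory list, assuming a hypothetical /data/disk0 through /data/disk35 mount layout (the property was called dfs.data.dir in the Hadoop releases of 2012; later versions renamed it dfs.datanode.data.dir):

```python
# Sketch: build the DataNode storage-directory list for a JBOD node.
# The /data/diskN mount layout is a hypothetical example, not anything
# Dell or Hadoop mandates.

NUM_DRIVES = 36  # the per-node spindle count cited above

data_dirs = ",".join(f"/data/disk{i}" for i in range(NUM_DRIVES))

# The value drops into hdfs-site.xml as dfs.data.dir (Hadoop 1.x) or
# dfs.datanode.data.dir (Hadoop 2.x and later):
print(f"<property>\n  <name>dfs.data.dir</name>\n  <value>{data_dirs}</value>\n</property>")
```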

The new Zeus chassis, developed in conjunction with the Texas Advanced Computing Center at the University of Texas (where Michael Dell's dorm room was when he started the company), is meant to be a more flexible design that more tightly couples compute, coprocessing, and storage while allowing slightly higher densities than prior machines. It's based on the "Scorpion" chassis that Dell's Data Center Solutions custom server unit cooked up for an unnamed customer, but with some tweaks.

The C8000 is the chassis that TACC will be using in its future "Stampede" hybrid supercomputer cluster, announced a year ago and set to be installed in early 2013. Stampede is the first publicly announced machine that will get a dominant portion of its compute capacity from the Xeon Phi x86-based parallel coprocessors.

Exactly how much extra oomph is unclear, but the machine will mix Xeon E5-2600 processors and Xeon Phi and Nvidia Tesla K20 GPU coprocessors to reach its initial 10 petaflops performance level.

Front view of the C8000 server chassis

The C8000 chassis holds eight single-wide compute sleds plus two single-wide power supply sleds, each with two 1,400 watt power supplies running at 94 per cent efficiency. But in December, according to Acosta, Dell will allow customers to yank the power supplies out of the box, freeing up two more slots for compute, coprocessors, or storage.

All sleds in the C8000 chassis will be juiced from an external 3U, half-depth chassis that can feed multiple C8000 enclosures. This power chassis will sit behind half-depth Force10 Networks and PowerConnect rack switches, using up free space in the rack. By doing this, Dell will be able to get ten two-socket servers into 4U of rack space, two more compute nodes than it could do with two C6220 2U enclosures, which top out at four compute nodes per enclosure.
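A quick back-of-the-envelope shows what that buys over a full rack, using the figures above. This is a sketch, not a Dell sizing guide: it assumes a standard 42U rack and takes Dell's pitch at face value that the half-depth power chassis hides behind the rack switches and so costs no extra slots.

```python
# Rough rack-density comparison from the figures above. Assumes a 42U
# rack; the zero-slot cost for the external power chassis follows from
# it sharing space behind the half-depth switches, per Dell's pitch.

RACK_U = 42

# C6220: 2U enclosure, four two-socket nodes, power supplies internal.
c6220_nodes = (RACK_U // 2) * 4    # 21 enclosures x 4 = 84 nodes

# C8000 with the power sleds pulled: 4U chassis, ten two-socket nodes.
c8000_nodes = (RACK_U // 4) * 10   # 10 chassis x 10 = 100 nodes

print(f"C6220: {c6220_nodes} nodes/rack; C8000: {c8000_nodes} nodes/rack "
      f"(+{c8000_nodes - c6220_nodes})")
```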

The important thing for hyperscale customers is that the C8000s can run a little hot. Most rack and blade servers from a few years back were certified to run at 25°C (77°F), which was the "gold standard" for many years. The C8000 enclosure and its components are now rated at 35°C (95°F), and will be rated at 45°C (113°F) later this year.

The base compute sled in the C8000 enclosure is the C8220, which is a two-socket computer based on the "Patsburg" C600 chipset. Dell is only supporting the Xeon E5-2600 variant in the C8000 chassis right now because the hyperscale customers it is chasing want the highest performance and are not as worried about the cost per flop or watt, which would mean looking at the Xeon E5-2400 alternative.

The Xeon E5-2400s use a different socket (LGA 1356 versus the E5-2600's LGA 2011), so a new sled design would be needed, but if customers come to Dell and say they want those chips, it's a possibility.

The Xeon E5-2600-based C8220 server sled

All of the Xeon E5-2600 SKUs, including the 130 and 135 watt parts, can be deployed in the C8220 sled, which has sixteen DDR3 memory slots and supports 4GB, 8GB, and 16GB memory sticks running at 1,066MHz, 1,333MHz, and 1,600MHz speeds. That tops out at 256GB, which is not a huge amount of memory, but for Hadoop and supercomputing workloads, it's sufficient. Xeon E5-2600 machines aimed at server virtualization tend to have 24 memory slots and support 32GB sticks, for a maximum capacity of 768GB of memory.

The sled has an integrated SATA controller on the C600 chipset, and you can snap in an optional LSI 9265-8i RAID controller if you want. The sled has room for two 2.5-inch drives, which mount on the sled itself and are not hot-pluggable from the front like disks in blade and modular server designs often are. You have your choice of a 100GB SATA SSD or a 1TB 7.2K RPM SATA drive for on-sled storage.

The sled has two Gigabit Ethernet ports and one 100Mbit dedicated management port; it also has two PCI-Express 3.0 slots. There's one x8 custom mezzanine slot for optional QDR and FDR InfiniBand adapters from Mellanox Technologies or 10GbE adapters from Intel, plus another low-profile x16 slot for the peripheral of your choosing.

The hybrid CPU-GPU C8220X compute sled

If you want to add coprocessor accelerators to your Xeon E5 nodes, then you want to take a look at the C8220X node. This is a hinged, double-wide sled that puts a two-socket E5-2600 node with two hot-plug 2.5-inch drives on one side of the hinge; it looks very much like the C8220 above. On the other side of the hinge, you can put storage or coprocessors (and presumably a mix of both).

The alternative side of the C8220X has room for eight 2.5-inch disks (SAS, SATA, or SATA SSD, ranging from 146GB to 1TB in size) or two Tesla, Xeon Phi, or FirePro coprocessors. You can also slot four 3.5-inch drives (SAS or SATA, with capacities ranging from 300GB to 2TB) into the other side of the hinge if you want fatter drives with more capacity. This server node has two PCI-Express 3.0 x8 slots plus the x8 mezz card slot.

The C8000XD storage sled

If you want to add more storage to a C8000's server sleds, then get the C8000XD storage sled, which is a direct-attach storage cage that links directly to the server sleds through LSI 9202 or 9280-e controllers, which slide into their warm and inviting PCI-Express slots.

The C8000XD is a double-wide sled that can house two dozen 2.5-inch or one dozen 3.5-inch drives; the drive carriers slide into the sled from the top. The dozen 3.5-inch carriers also come in a variant that puts two physical 2.5-inch units inside each 3.5-inch carrier.

If you want to go crazy with Hadoop, you can lash two of the C8000XD units to a C8220 sled and have 48 1TB SATA drives. Dell is also supporting Intel's 100GB eMLC and 160GB MLC SSDs in the storage sled and a variety of SAS and SATA drives in either form factor. The storage sled tops out at 36TB of capacity.
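For the Hadoop case, the raw spindle count overstates what you actually get to keep. Here is a rough sketch of the usable capacity of that 48-drive node, assuming HDFS's default replication factor of three (a stock Hadoop default, not anything Dell specifies) and ignoring the scratch space real clusters reserve for intermediate map output:

```python
# Rough usable-capacity math for the 48-spindle Hadoop setup above.
# Assumes HDFS's default 3x replication and ignores non-HDFS overhead
# (logs, intermediate map output) that a real cluster would reserve.

drives_per_node = 48
drive_tb = 1.0
replication = 3

raw_tb = drives_per_node * drive_tb    # 48 TB raw per node
usable_tb = raw_tb / replication       # ~16 TB of HDFS data per node

print(f"{raw_tb:.0f} TB raw -> ~{usable_tb:.0f} TB usable at {replication}x replication")
```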

You no doubt noticed that Dell did not announce server sleds for the Zeus chassis based on AMD's Opteron 4200 or 6200 processors. Acosta says that the PowerEdge C6105 and C6145 machines (launched in September 2010 and February 2011, respectively), which support the latest Opterons, are "still doing well."

While making no promises at all, Acosta said the C8000 chassis could see some Opteron sleds – presumably if enough customers ask for them. "We want to support both AMD and Intel because we think competition is good for customers," says Acosta. Which is why there should already be an AMD option in the C8000 chassis, really.

The C8220 and C8220X server sleds are certified to run Microsoft Windows Server 2008 R2 SP1, SUSE Linux Enterprise Server 11 SP1, and Red Hat Enterprise Linux 6.0. Dell is working on certifications for FreeBSD Unix, the CentOS clone of RHEL, and Canonical's Ubuntu Server. Windows Server 2012 will be certified on the machines in December. On the hypervisor front, XenServer 5.6 from Citrix Systems, ESXi 5.0 from VMware, and Hyper-V 2008 R2 SP1 are all supported on the iron.

The C8000 iron is available now, and a base configuration with eight server sleds, each with a standard 95 watt Xeon part – not anywhere near the top bin at Intel – with 64GB of memory and two disks, will cost around $35,000. ®