Dell crams two four-way Opterons into cloudy server
First up with clock-cranked 'Magny-Cours'
Are you a hyperscale data center or supercomputer shop looking to cram lots of x64 cores into a tight space while keeping plenty of PCI-Express peripheral expansion? Take a good look at the new PowerEdge C6145, announced today by Dell.
The PowerEdge C6145 is one in a growing line of bespoke cloudy boxes that Dell sort of mainstreamed last March. Dell's Data Center Solutions unit, which creates custom servers for Facebook, Microsoft, and a few dozen other big hyperscale data center customers as well as startups such as game hoster OnLive, is the shining star of Dell's overall server business. Some customers need the benefits of these custom boxes, but they don't want to pay for a DCS engagement, and that is why Dell created the PowerEdge-C family: to span the gap between general purpose PowerEdge machines and custom DCS gear.
Like many DCS designs, the PowerEdge C6145 is based on Advanced Micro Devices' Opteron processors. In this case, the C6145 is a 2U server that puts two quad-socket G34 motherboards supporting the "Magny-Cours" Opteron 6100s into that chassis, one stacked atop the other. The 2U chassis has two hot-plug power supplies, rated at 1,100 or 1,400 watts, that are shared by the motherboards; each board is based on AMD's SR5690/SP5100 chipset. The machine supports the Opteron 6100s launched last March as well as the faster versions announced today.
Tim Carroll, director of Dell's HPC business unit, says that the C6145 leverages the same motherboard that was used in the PowerEdge R815 rack server, which made its debut last April. Unlike the prior cloudy PowerEdge boxes, which had a single PCI-Express x16 slot and an x8 on a riser card, the C6145 has three x16 slots on a riser, plus one dedicated x16 host interface card slot and an x8 mezzanine daughter card on each node inside the box. So you can pipe in a total of eight PCI-Express x16 peripheral links back into the C6145 chassis.
This will be extremely useful for companies that want to attach lots of storage or networking to server nodes in dense configurations, or those who want to cram a lot of cores into a box and lash them to lots of external GPU co-processors. Like, for instance, using another DCS-derived box, the PowerEdge C410x, which can cram a total of 16 GPUs into a 3U chassis and has eight PCI-Express links coming out of it. El Reg told you all about this mother of all graphics cards back in August.
Dell's 96-core, 2U PowerEdge C6145
Now, with the combination of the twelve-core Opterons and the C6145 chassis, Dell can put eight sockets (with a total of 96 cores) in a 2U space and link out to sixteen GPUs; the C410x GPU chassis has fan-outs and PCI-Express switches that allow for one to four GPUs to be allocated to a server node, and you probably only need four of the six PCI-Express x16 slots to do a balanced configuration, leaving room for some storage links.
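As a quick sanity check on that density claim, here's the back-of-the-envelope arithmetic in Python (the numbers come from the article; the variable names and the two-GPUs-per-link split are our own working assumptions):

```python
# Core and GPU density math for the PowerEdge C6145, per the article's figures.
boards_per_chassis = 2        # two quad-socket G34 boards stacked in the 2U box
sockets_per_board = 4
cores_per_socket = 12         # twelve-core "Magny-Cours" Opteron 6100s

cores_per_chassis = boards_per_chassis * sockets_per_board * cores_per_socket
print(cores_per_chassis)      # 96 cores in 2U

# Each node exposes four x16 links (three on the riser plus the dedicated
# host interface card slot), so the chassis has eight in total -- enough to
# reach all 16 GPUs in a C410x at an assumed two GPUs per link.
x16_links = boards_per_chassis * 4
gpus_reachable = x16_links * 2
print(x16_links, gpus_reachable)   # 8 16
```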
The C6145 can support the eight-core or twelve-core variants of the Opteron 6100s, and in theory any of the chips would work. But Dell is only supporting the new 2.5 GHz Opteron 6180 SE (twelve-core), 2.3 GHz Opteron 6176 (twelve-core), and 2.4 GHz Opteron 6136 (eight-core) among the new chips coming out from AMD today. The existing Opteron 6172, 6164, and 6132 can be used in the C6145 as well.
The machine uses low-voltage (1.35 volt) DDR3 main memory in capacities of 4 GB or 8 GB, and has 32 slots for a maximum of 128 GB or 256 GB per system board. If you want to use Samsung's Green DDR3 brand of low-voltage memory, it has been certified in this C6145 machine. When 16 GB DDR3 memory sticks are certified on the machine soon, it will be able to cram 1 TB into the 2U space.
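Those memory ceilings follow straight from the slot count. A quick sketch of the arithmetic (figures from the article; the dictionary layout is just our bookkeeping):

```python
# Memory capacity per board and per 2U chassis for each DDR3 stick size.
slots_per_board = 32
boards_per_chassis = 2

per_board = {gb: slots_per_board * gb for gb in (4, 8, 16)}
per_chassis = {gb: capacity * boards_per_chassis for gb, capacity in per_board.items()}

print(per_board)     # {4: 128, 8: 256, 16: 512} -- GB per system board
print(per_chassis)   # {4: 256, 8: 512, 16: 1024} -- 16 GB sticks hit 1 TB per 2U
```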
The mobo used in the C6145 and R815 servers has an LSI 9260-8i RAID controller and an LSI 2008 6 Gb/sec SAS controller. It has an Intel Gigabit Ethernet port and optional Mellanox ConnectX-2 dual-port, 40 Gb/sec InfiniBand or Intel 82599 dual-port 10 Gigabit Ethernet adapters (which will eat some of those precious PCI slots). The C6145 chassis can have 24 front-mounted 2.5-inch SAS or SATA disks or a dozen 3.5-inch SAS or SATA drives. There are also solid state drives available in 2.5-inch slots. You top out at 2.4 TB on the SSDs, 12 TB using 2.5-inch drives, and 48 TB using 3.5-inch drives that weigh in at 2 TB each.
The PowerEdge C6145 can run Novell's SUSE Linux Enterprise Server 11 SP1, Red Hat Enterprise Linux 5.5, and Microsoft's Windows Server 2008 R2 Enterprise x64 and HPC Server 2008 R2 x64 variants. VMware's ESXi 4.1 hypervisor has been certified to run on the cloudy box, as has Citrix Systems' XenServer 5.6 and Microsoft's Hyper-V R2.
To give a sense of how HPC shops might make use of the C6145, Dell cooked up an x64-only comparison using SPEC's floating point benchmark. On the SPECfp_rate2006 test, a single node in the C6145 using the Opteron 6180 SE processor and configured with two 146 GB disks and 128 GB of memory was able to deliver a peak rating of 654 on the SPEC floating point test. With two nodes running the workload, the C6145 was able to do 1,310. Dell doesn't have an eight-socket Xeon or Opteron box (the former has not been launched and the latter is not possible), but Hewlett-Packard's ProLiant DL980 G7, also tested on the SPEC number-crunching test, has eight sockets and, using Intel's Xeon 7560 processors (which run at 2.26 GHz), it was able to achieve a SPEC peak rating of 1,080.
So the two Opteron boards had about 21 per cent more floating point oomph than the eight-socket ProLiant DL980, which cost around $131,000. The Dell PowerEdge C6145 chassis with two configured nodes cost $24,997. This is not precisely a fair comparison, of course, because two four-socket ProLiant DL580s using Xeon 7500s or ProLiant DL585s using the same Opterons would have been less expensive than the DL980 box. But they are nowhere near as dense as this Dell box, either, and that matters, too, for hyperscale and HPC shops.
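The price/performance gap is stark enough to be worth checking. A minimal sketch of the arithmetic behind those percentages, using the SPEC ratings and prices quoted above (this is just division, not a benchmark run):

```python
# SPEC fp peak ratings and list prices from the article.
c6145_rate, c6145_price = 1310, 24997     # two-node PowerEdge C6145
dl980_rate, dl980_price = 1080, 131000    # eight-socket ProLiant DL980 G7

# Throughput advantage of the C6145 over the DL980.
advantage = (c6145_rate - dl980_rate) / dl980_rate
print(f"{advantage:.0%}")                 # 21%

# Dollars per SPEC fp_rate point -- the C6145 is several times cheaper per unit.
print(round(c6145_price / c6145_rate, 2))
print(round(dl980_price / dl980_rate, 2))
```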
The PowerEdge C6145 will be available on February 28. ®