Original URL: http://www.theregister.co.uk/2010/06/09/dell_poweredge_server_update/

Dell kicks out new blades and racks

PowerEdge snuggles with fast GPUs and fat SSDs

By Timothy Prickett Morgan

Posted in Servers, 9th June 2010 16:02 GMT

A resurgent Dell, riding the x64 upgrade wave in the wake of the recession, will crank out three more machines today, broadening its PowerEdge lineup to chase some more money. The new machines include two PowerEdge blade servers and a new rack machine. All will start shipping in July.

The PowerEdge M610x blade server is probably the most interesting of the lot. This is a full-height, two-socket blade based on Intel's quad-core Xeon 5500 and six-core Xeon 5600 processors. That's not the interesting part. The M610x blade has two full-length, full-height PCI-Express 2.0 x16 slots - which is a new thing for blade servers, but something others will no doubt deliver soon - and that means you can plug all kinds of neat things into it. Like Nvidia's Tesla 20 GPU co-processor cards or Fusion-io's ioDrive Duo flash disk to accelerate calculations or I/O for specific workloads. (Yes, each blade has its own dedicated GPU or flash drives. So much for that whole "virtual I/O" thing that blades were supposed to do.)

The PCI-Express x16 slots on the M610x blade have extra power connectors that suck juice off the PowerEdge M1000e chassis, and can support one two-slot device sipping up to 300 watts or two single-slot devices that together chug 500 watts (250 watts each). The current C2050 Tesla 20 GPU co-processor is rated at 515 gigaflops at double-precision math (1.03 teraflops single precision) and burns 247 watts going full out.
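As a back-of-the-envelope sanity check (our arithmetic, not Dell's spec sheet), the C2050's draw fits inside either power budget; this little Python sketch just encodes the figures above:

    # Sanity-check the M610x slot power budgets against the Tesla C2050 draw.
    # Figures come from the article; the check itself is just illustrative.
    TWO_SLOT_BUDGET_W = 300     # one double-wide device per blade
    SINGLE_SLOT_BUDGET_W = 250  # each of two single-wide devices (500 W total)
    C2050_DRAW_W = 247          # Tesla C2050 running full out

    assert C2050_DRAW_W <= TWO_SLOT_BUDGET_W     # fits the double-wide budget
    assert C2050_DRAW_W <= SINGLE_SLOT_BUDGET_W  # squeaks under the single-slot cap
    print(f"single-slot headroom: {SINGLE_SLOT_BUDGET_W - C2050_DRAW_W} W")  # 3 W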

So in theory, you can cram a teraflops of number-crunching power into each blade, which means eight two-socket Xeon blades in the box yield 8.24 teraflops of GPU power (at double precision) in the 10U chassis. That works out to 32.96 teraflops in a rack, with 2U left over to play with. A mere 31 racks and you are breaking the petaflops barrier. Provided your code works well on Tesla GPUs, and on so many of them operating in parallel, of course.
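Spelled out, the arithmetic goes like this (a sketch using the article's own figures; two GPUs per blade is implicit in the 8.24 teraflops number):

    import math

    GFLOPS_DP_PER_C2050 = 515   # double-precision gigaflops per Tesla C2050
    GPUS_PER_BLADE = 2          # implicit in the 8.24 teraflops figure above
    BLADES_PER_CHASSIS = 8      # full-height blades in a 10U M1000e chassis
    CHASSIS_PER_RACK = 4        # four 10U chassis leave 2U spare in a 42U rack

    tf_chassis = GFLOPS_DP_PER_C2050 * GPUS_PER_BLADE * BLADES_PER_CHASSIS / 1000
    tf_rack = tf_chassis * CHASSIS_PER_RACK
    print(tf_chassis)                 # 8.24 teraflops per chassis
    print(tf_rack)                    # 32.96 teraflops per rack
    print(math.ceil(1000 / tf_rack))  # 31 racks to break a petaflops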

The PowerEdge M610x is based on Intel's 5520 chipset and supports the 60 watt, 80 watt and 95 watt versions of the Xeon 5500 and Xeon 5600 processors. The blade has a dozen memory slots, and supports memory sticks in 1 GB, 2 GB, 4 GB, 8 GB and 16 GB capacities. The M610x has three different I/O mezzanine card slots that allow Gigabit and 10 Gigabit Ethernet, Fibre Channel and InfiniBand ports to be snapped into the blade. The M610x also has room for two hot swap disk drives, which can be 2.5-inch SAS drives spinning at either 10K or 15K RPM or 2.5-inch SATA drives whirring at 7200 RPM. Dell is also peddling a SATA solid state disk with 100 GB of capacity if you don't want to use the Fusion-io Duo SSD.

Fusion-io has a special-bid SSD that puts eight ioDrive units on a double-wide x16 card, called the Octal, which delivers five terabytes of capacity and 800,000 IOPS at 6GB/sec of bandwidth. That would be a sweet I/O subsystem for a blade. It's a pity the M610x doesn't have three or four PCI-Express slots. You could do lots of disk I/O and math all in the same blade, perhaps even creating a distributed Lustre file system and assigning processing to blade nodes where the data is already resident instead of moving data to where a node is requesting it for processing.
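Divided across its eight modules, the Octal's headline numbers shake out roughly as follows (our division of the article's figures, not Fusion-io's datasheet):

    MODULES = 8
    print(5000 / MODULES)      # 625 GB of capacity per ioDrive module
    print(800_000 // MODULES)  # 100,000 IOPS per module
    print(6.0 / MODULES)       # 0.75 GB/sec of bandwidth per module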

The PowerEdge M610x supports Microsoft's Windows Server 2008 and its R2 update as well as Red Hat's Enterprise Linux 5, Novell SUSE Linux Enterprise Server 11 and Oracle Solaris 10. Microsoft's Hyper-V, Citrix Systems' XenServer, and VMware's ESXi hypervisors are also certified on the blade (in both embedded versions running on baby flash sticks and full versions running on disks). A base M610x configuration will cost $2,269.

The PowerEdge M710HD is a high-memory blade designed to support virtualized server workloads. Dell wants to beef up the memory on its blade servers because, paradoxically, the virtualization attach rate is twice as high on PowerEdge blade servers as it is on PowerEdge rack servers, according to Brian Payne, director of server product management at Dell. Blades tend to skimp a little bit on memory compared to racks because of heating and space issues, but Dell's customers seem to like to virtualize on blades, and that means they want fatter (in terms of memory capacity) blade servers.

Servers supporting virtual machines tend to run out of memory capacity long before they run out of CPU capacity, which is why the market for eight-socket servers is diminishing. With four-socket boxes based on the Opteron 6100 and Xeon 7500 offering 512GB and 1TB of memory capacity, respectively, there's no need to go to eight sockets for most workloads. Ditto for two-socket machines that have the full complement of 18 memory slots on six-core Xeon 5600 processors.
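Those capacity figures fall straight out of DIMM slot counts. As a sketch, assuming 32 memory slots on a four-socket Opteron 6100 box (as on the R815 below) and 64 slots on a four-socket Xeon 7500 box - the latter is our assumption, not a figure from this article - with 16GB sticks:

    STICK_GB = 16
    print(32 * STICK_GB)  # 512 GB on a 32-slot, four-socket Opteron 6100 box
    print(64 * STICK_GB)  # 1,024 GB (1 TB) on a 64-slot, four-socket Xeon 7500 box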

The M710HD blade uses the Intel 5520 chipset and supports both the Xeon 5500 and 5600 processors, just like the M610x above and a bunch of other Dell blades, racks, and towers. It has 18 memory slots, and using 8GB memory sticks, you can put 144GB on the blade, or 12GB per core. If you switch to 16GB sticks, you can only put a dozen in the machine, for a maximum of 192GB, or 16GB per core.
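The slot arithmetic, spelled out (12 cores assumes a pair of six-core Xeon 5600s; the 12-slot cap with 16GB sticks is explained below):

    CORES = 12  # two six-core Xeon 5600 processors

    # All 18 slots filled with 8GB sticks:
    print(18 * 8, (18 * 8) // CORES)    # 144 GB total, 12 GB per core

    # Only a dozen slots usable with 16GB sticks:
    print(12 * 16, (12 * 16) // CORES)  # 192 GB total, 16 GB per core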

That 12-slot cap is a limit in the integrated memory controller in the Nehalem-EP and Westmere-EP Xeon chips for two-socket servers, not in the motherboard. Cisco Systems gets around this limit in its Unified Computing System blades with its own intermediary memory controller ASIC, which tricks the Xeon chips into thinking they are addressing a smaller amount of memory than they actually are.

The M710HD blade has room for two 2.5-inch SAS drives, which come in 10K and 15K RPM flavors with capacities ranging from 36GB to 300GB. Dell has a range of SSDs for this blade, which range in size from 25GB to 150GB. The blade also has three mezzanine I/O slots, for 10 Gigabit Ethernet, InfiniBand, and Fibre Channel links.

The M710HD is the first Dell blade with what the company is calling the network daughter card, or NDC, which is a configurable base network port for the blade, rather than one hard-wired onto it. The initial NDC card will support Gigabit Ethernet ports, but in the future other converged network adapters will be available in NDC versions. 10 Gigabit Ethernet is the obvious second NDC to add, with InfiniBand third, if at all. The M710HD also has dual, mirrored flash sticks for hosting the embedded Hyper-V, XenServer, and ESXi hypervisors. The M710HD supports Windows, Linux, and Solaris, and a base configuration will set you back $2,474.

Finally, Dell will today put a two-socket, rack-based box into the field using Advanced Micro Devices' twelve-core Opteron 6100 processors. Called the PowerEdge R715, it is the younger brother to the four-socket PowerEdge R815, announced in late March alongside Dell's machines using Intel's high-end "Nehalem-EX" Xeon 7500 processors. The feeds and speeds of the PowerEdge R815 were not available when we covered the Xeon 7500 boxes back in early April, so we'll do both now.

The PowerEdge R715 uses AMD's homegrown chipsets for the Opteron 6100s, the SR56X0 I/O hub and the SP5100 southbridge, which can support two or four sockets. The 2U rack server has 16 memory slots, which means it can host up to 256GB using 16GB memory modules.
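That ceiling is simple slot arithmetic (ours, not Dell's):

    SLOTS = 16
    print(SLOTS * 16)  # 16 slots x 16GB sticks = 256 GB tops on the R715
    print(SLOTS // 2)  # eight memory slots hanging off each Opteron 6100 socket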

The R715 has six PCI-Express 2.0 slots (five x8 and one x4), plus an x4 storage slot with an x8 connector, and room for six 2.5-inch SAS, SATA, or SSD drives.

The R715 obviously can support more cores and more memory than a standard two-socket Xeon 5600 box, but it is unclear if this yields an actual performance boost over such a "Westmere-EP" server. (We aim to find out soon, once some benchmarks are out.)

The R715 supports Windows, Linux, and Solaris, just like the Xeon machines above, but on the hypervisor front only XenServer and ESX Server are certified on this Opteron box. In a base configuration, the R715 will cost $3,199.

The R815 packs four sockets, or 48 cores, into the same 2U of rack space. Only 256GB of memory is supported on this machine at this time because its 32 memory slots top out at 8GB capacities. (Yes, it is silly that 16GB sticks are not on this box if they are on the R715.) The machine has the same I/O options as the R715 (same slots and disk bays), and in a base configuration with two Opteron 6168 processors running at 1.9 GHz, 64GB of memory, three 146GB disks and no operating system, it costs $12,133. This machine has been available since last month, which is why we can tell you the precise configuration associated with the base price. ®