Dell kicks out new blades and racks

PowerEdge snuggles with fast GPUs and fat SSDs

A resurgent Dell, riding the x64 upgrade wave in the wake of the recession, is cranking out three more machines today, broadening its PowerEdge lineup to chase some more money. The new machines include two PowerEdge blade servers and a new rack machine, and all of them will start shipping in July.

The PowerEdge M610x blade server is probably the most interesting of the lot. This is a full-height, two-socket blade based on Intel's quad-core Xeon 5500 and six-core Xeon 5600 processors. That's not the interesting part. The M610x blade has two full-length, full-height PCI-Express 2.0 x16 slots - a new thing for blade servers, but something others will no doubt deliver soon - and that means you can plug all kinds of neat things into it. Like Nvidia's Tesla 20 GPU co-processor cards or Fusion-io's ioDrive Duo flash disks to accelerate calculations or I/O for specific workloads. (Yes, each blade gets its own dedicated GPUs or flash drives. So much for that whole "virtual I/O" thing that blades were supposed to do.)

The PCI-Express x16 slots on the M610x blade have extra power connectors that suck juice off the PowerEdge M1000e chassis, and together they can support either one two-slot device sipping up to 300 watts or two single-slot devices chugging up to 500 watts between them (250 watts each). The current Tesla 20 GPU co-processor, the C2050, is rated at 515 gigaflops doing double-precision math (1.03 teraflops at single precision) and burns 247 watts going full out.
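
If you are wondering whether a pair of those GPUs fits inside that power envelope - as the chassis math below assumes - the quoted figures say yes, just barely. Here's a quick back-of-the-envelope check in Python, a sketch based only on the numbers above, not an official Dell power spec:

    # Do Tesla C2050s fit the M610x slot power budget? (Figures from the
    # article; treat them as approximate, not as Dell's official spec.)
    TWO_SLOT_DEVICE_BUDGET_W = 300    # one two-slot device per blade
    SINGLE_SLOT_BUDGET_W = 250        # each of two single-slot devices
    C2050_W = 247                     # Tesla C2050 running flat out

    print(C2050_W <= TWO_SLOT_DEVICE_BUDGET_W)      # True: one card fits
    print(2 * C2050_W <= 2 * SINGLE_SLOT_BUDGET_W)  # True: 494 W for a pair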

So in theory, you can cram just over a teraflops of number-crunching power into each blade, and with eight two-socket Xeon blades in the box, that works out to 8.24 teraflops of GPU power (at double precision) in the 10U chassis. Four of those chassis give you 32.96 teraflops in a rack, with 2U still left over to play with. So a mere 31 racks and you are breaking the petaflops barrier. Provided your code works well on Tesla GPUs, and on so many of them operating in parallel, of course.
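
Spelled out in Python, the arithmetic looks like this - a sketch using the per-card rating quoted above:

    # The petaflops math, spelled out (figures from the article)
    import math

    gpu_dp_gflops = 515        # Tesla C2050, double precision
    gpus_per_blade = 2         # one per PCI-Express x16 slot
    blades_per_chassis = 8     # full-height blades in the 10U M1000e
    chassis_per_rack = 4       # 4 x 10U = 40U, leaving 2U of a 42U rack

    blade_tf = gpu_dp_gflops * gpus_per_blade / 1000.0   # 1.03 teraflops
    chassis_tf = blade_tf * blades_per_chassis           # 8.24 teraflops
    rack_tf = chassis_tf * chassis_per_rack              # 32.96 teraflops
    racks = math.ceil(1000 / rack_tf)                    # 31 racks to a petaflops

    print(blade_tf, chassis_tf, rack_tf, racks)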

The PowerEdge M610x is based on Intel's 5520 chipset and supports the 60 watt, 80 watt and 95 watt versions of the Xeon 5500 and Xeon 5600 processors. The blade has a dozen memory slots and supports memory sticks in 1 GB, 2 GB, 4 GB, 8 GB and 16 GB capacities, for a maximum of 192 GB. The M610x has three different I/O mezzanine card slots that allow Gigabit Ethernet, 10 Gigabit Ethernet, Fibre Channel and InfiniBand ports to be snapped into the blade. The M610x also has room for two hot-swap disk drives, which can be 2.5-inch SAS drives spinning at either 10K or 15K RPM or 2.5-inch SATA drives whirring along at 7,200 RPM. Dell is also peddling a 100 GB SATA solid state disk if you don't want to use the Fusion-io ioDrive Duo.

Fusion-io has a special-bid SSD, called the Octal, that puts eight ioDrive units on a double-wide x16 card and delivers five terabytes of capacity, 800,000 IOPS, and 6 GB/sec of bandwidth. That would be a sweet I/O subsystem for a blade. It's a pity the M610x doesn't have three or four PCI-Express slots: you could do lots of disk I/O and math all in the same blade, perhaps even creating a distributed Lustre file system and assigning processing to the blade nodes where the data is already resident instead of moving data to whatever node is requesting it for processing.

The PowerEdge M610x supports Microsoft's Windows Server 2008 and its R2 update as well as Red Hat's Enterprise Linux 5, Novell's SUSE Linux Enterprise Server 11 and Oracle's Solaris 10. Microsoft's Hyper-V, Citrix Systems' XenServer and VMware's ESXi hypervisors are also certified on the blade (in both embedded versions running on baby flash sticks and full versions running on disks). A base M610x configuration will cost $2,269.

The PowerEdge M710HD is a high-memory blade designed to support virtualized server workloads. Dell wants to beef up the memory on its blade servers because, paradoxically, the virtualization attach rate is twice as high on PowerEdge blade servers as it is on PowerEdge rack servers, according to Brian Payne, director of server product management at Dell. Blades tend to skimp a little on memory compared to racks because of heating and space issues, but Dell's customers seem to like virtualizing on blades, and that means they want fatter (in terms of memory capacity) blade servers.
