Dell flashes enhanced 12G racks of PowerEdge gear

Plus tower and cloud packages unveiled

A rack with fatter storage

There's a variant of this machine called the PowerEdge R720xd that packs substantially more storage wallop. It has the same two Xeon E5 engines and 768GB of memory capacity as the other machines, but this 2U rack server can hold twenty-four 2.5-inch drives or twelve 3.5-inch drives, plus another two 2.5-inch drives that slide into the back of the chassis. This system has only six PCI-Express slots (two x16 and four x8) and sports the same Broadcom and Intel networking daughter cards and the same power supply options as the R620.

Dell PowerEdge M620

A new blade server sharpened on a Sandy Bridge

The PowerEdge M620 is a half-height blade server that slides into the existing M1000e blade chassis, which is a 10U enclosure that holds either sixteen half-height blades or eight full-height blades. Interestingly, the M620 blade sports the same two Xeon E5 processors and the same 768GB maximum memory as the tower and rack machines. This M620 machine sports a slew of I/O mezzanine card options based on silicon from Intel, Brocade Communications, Broadcom, Emulex, QLogic, and Mellanox Technologies, including Gigabit and 10 Gigabit Ethernet, Quad Data Rate (40Gb/s) and Fourteen Data Rate (56Gb/s) InfiniBand, and 8Gb/s Fibre Channel links.

The M620 has two hot-plug, 2.5-inch drive bays, and can have SSD, SATA disk, or SAS disk drives slid into those two slots. The PERC S110 controller with software RAID and the PERC H310, H710, and H710P controllers for internal RAID arrays can snap onto this M620 blade. The M620 has two SD cards for redundant embedded hypervisors to sit on.

The five PowerEdge machines outlined above are the general-purpose boxes aimed at most enterprise customers. They support Microsoft's Windows Server 2008 R2 SP1 and its Hyper-V v2 hypervisor, as well as Windows Small Business Server 2011. On the Linux front, SUSE Linux Enterprise Server and Red Hat Enterprise Linux are certified on the boxes (presumably SLES 11 SP1 and RHEL 5.6 and 6.1, but Dell doesn't say). If you don't want Hyper-V as your hypervisor, you can use XenServer from Citrix Systems or ESXi from VMware (the exact versions and releases were not available at press time).

Generally speaking, Payne tells El Reg that the PowerEdge R620 is being pitched at server virtualization workloads: it can pack more than three times as many virtual machines into a rack as the prior PowerEdge 11G machines, thanks to the combined increases in cores, main memory, and networking bandwidth in the 12G boxes.
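To see why the jump is so large, it helps to remember that virtualization hosts usually run out of memory before they run out of cores. Here's a minimal back-of-the-envelope sketch of that effect; the VM sizing, overcommit ratio, and node specs below are illustrative assumptions, not Dell's published math:

```python
# Back-of-the-envelope VM capacity model. The VM sizing, CPU overcommit
# ratio, and node specs are illustrative assumptions, not Dell's numbers.

def vms_per_node(cores: int, memory_gb: int,
                 vcpus_per_vm: int = 1, gb_per_vm: int = 8,
                 cpu_overcommit: float = 4.0) -> int:
    """A node holds as many VMs as its scarcest resource allows."""
    by_cpu = int(cores * cpu_overcommit / vcpus_per_vm)
    by_memory = memory_gb // gb_per_vm
    return min(by_cpu, by_memory)

# Hypothetical 11G node (two six-core Xeon 5600s, 192GB) versus a
# 12G node (two eight-core Xeon E5s, 768GB)
print(vms_per_node(cores=12, memory_gb=192))  # 24 (memory-bound)
print(vms_per_node(cores=16, memory_gb=768))  # 64 (now CPU-bound)
```

On these made-up figures, quadrupling the memory shifts the bottleneck from DIMMs to cores and nearly triples per-node VM density; fold in the extra network bandwidth and that is roughly the shape of the claim Dell is making.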

The M620 is similarly dense enough to be pitched at heavy virtualization or HPC workloads, and the R720xd is being pegged as the ideal box for "collaboration", by which Dell means servers running workloads that need more storage than the typical two-socket box.

While there is no official certification for use in libraries, Payne says that the new T620 tower has "unprecedented acoustics" and is quiet enough to be used in a library. (Yes, people still go to libraries and we still fund them, thank heavens.) What I can tell you from personal experience is that a two-socket tower server of the Xeon 5400 generation, with a mere six drives, was loud enough to drive me to outsource my servers rather than let it deafen me in my small office.

But wait, there are two more boxes

A few years back, Dell started commercializing some of the custom server designs that it had created for hyperscale data center operators such as Facebook through its Data Center Solutions (DCS) bespoke server unit. A few of these were made available on a special-bid basis as the PowerEdge-C line. With the Xeon E5 launch, there is one new official model of the PowerEdge-Cs and probably a bunch of internal ones that only DCS customers get to see.

Dell PowerEdge C6220

Dell's Xeon E5 cloud box, the PowerEdge C6220

The C6220 is a 2U chassis that has up to four server trays in it, each one with its own two-socket Xeon E5 machine. Each node in the chassis has 16 DDR3 memory slots, holding a maximum of 512GB using 32GB memory sticks when they are supported. Right now, the machine tops out at 256GB using 16GB sticks. The chassis can have a dozen 3.5-inch drives or two dozen 2.5-inch drives mounted on the front, with the drives allocated as you see fit to the four nodes in the box.
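The memory ceilings quoted above fall straight out of the slot count; here's the arithmetic as a quick, purely illustrative sketch:

```python
# Sanity-check of the C6220 memory figures quoted above (illustrative only).
SLOTS_PER_NODE = 16
NODES_PER_CHASSIS = 4

for stick_gb in (16, 32):  # 32GB DIMMs count once they are supported
    node_gb = SLOTS_PER_NODE * stick_gb
    print(f"{stick_gb}GB sticks: {node_gb}GB per node, "
          f"{node_gb * NODES_PER_CHASSIS}GB per chassis")
# 16GB sticks: 256GB per node, 1024GB per chassis
# 32GB sticks: 512GB per node, 2048GB per chassis
```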

The server nodes have mezzanine cards for Ethernet or InfiniBand networking and SAS host bus adapters. You can also use the x16 slots on the nodes to reach out to the C410x "mother of all graphics cards" PCI-Express expansion chassis that DCS started selling back in August 2010. Each C6220 node has two Gigabit Ethernet ports and can sport an LSI 2008 SAS controller or an LSI 9265-8i RAID controller.

The nodes are certified to support SLES 11 SP1, RHEL 6.0, and Windows Server 2008 R2 SP1 (including the HPC variant and the Hyper-V hypervisor). XenServer 5.6 and ESXi 5.0 are also supported on the nodes.

The chassis comes in two-node and four-node versions, and in the two-node version you get more I/O slots: one x8 mezz slot and two x16 PCI slots. The four-node version just has one x8 mezz slot and one x16 PCI slot. The nodes do not have the full-on iDRAC service processor, but rather just an IPMI 2.0-compliant baseboard management controller and support for Intel's Node Manager power management tool.

In addition to these six machines, Dell is also talking a bit about one other machine: the PowerEdge R820 rack server, which is being positioned as "the right server for database".

Sally Stevens, vice president of PowerEdge server marketing at Dell, would not confirm any of the feeds and speeds of this machine, except to say that it was based on a future Xeon processor capable of supporting four sockets in a single system image and that early adopters were playing around with an early version of the R820 right now.

It stands to reason that the PowerEdge R820 will be based on either the rumored four-socket Xeon E5 variant in the Sandy Bridge family or the "Ivy Bridge-EX" high-end kicker to the current Xeon E7. Intel has not said much about four-socket Xeon E5 variants, but they were on the motherboard roadmap that the company was showing off at the SC11 supercomputing conference last November.

Intel has said even less about whatever comes after the current ten-core Xeon E7 chips for machines with two, four, and eight sockets and much larger memory footprints than the two-socket Xeon 5600s could provide.
