Dell gets flexible with servers
What Intel's 'Nehalem-EX' is really good for
Maybe what Intel really needs to do with its future Xeon processors is stop jacking up the core count so high and start putting more memory controllers on the chips and more memory channels in each socket. Maybe Intel should get back into the memory business.
When server maker Dell launched its two-socket and four-socket PowerEdge machines using the just-announced "Nehalem-EX" Xeon 7500 processors, it spent more time talking about memory than about processing. Just like IBM's System x and BladeCenter machines using the Xeon 7500s, Dell's new high-end PowerEdge boxes aim to give x64 server customers a lot more memory to play with than the two-socket boxes using the Xeon 5500 (from March 2009) and 5600 processors (launched in mid-March), which top out at 18 DDR3 memory slots, or 144 GB using 8 GB memory sticks. In a modern virtualized server world, that is just not enough memory.
So with the Xeon 7500s, server makers are all trying to cram as much memory as possible into their machines, and Dell is no exception with its PowerEdge line. Dell has two four-socket rack servers, a four-socket blade server, and a two-socket box with the memory slots of a four-socket machine for those who need to double up on memory per socket.
The PowerEdge M910 is the blade server, and unlike many of its siblings, it is a full-height blade that spans the 10U enclosure from bottom to top. The M910 is, however, a single-wide. Dell has not taken the modular blade approach IBM has with its BladeCenter HX5, which allows two two-socket blades to be snapped together to make a four-way and which also has optional Max5 memory expansion cards that allow a two-way to host 320 GB of memory and a four-way to have 640 GB using 8 GB DIMMs. The PowerEdge M910 from Dell is a more straightforward blade with four sockets and 32 memory slots, for a maximum capacity of 512 GB using 16 GB DIMMs.
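All of the capacity ceilings quoted in this story reduce to DIMM slots times stick size. A trivial sketch of the arithmetic (figures from the article; the function name is ours, not Dell's):

```python
def max_memory_gb(slots, dimm_gb):
    """Peak capacity is simply DIMM slots times stick size."""
    return slots * dimm_gb

# Figures from the article:
print(max_memory_gb(18, 8))   # Xeon 5500/5600 two-socket box: 144 GB
print(max_memory_gb(32, 16))  # PowerEdge M910 blade: 512 GB
print(max_memory_gb(64, 16))  # PowerEdge R910 rack server: 1024 GB, i.e. 1 TB
```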
Dell's is a more brute-force approach, and without knowing what IBM is charging for the Max5 cards, it is hard to say which is cheaper. But the odds are that, for a given capacity, 8 GB memory sticks will be a lot cheaper than 16 GB sticks. And Brian Payne, senior manager for server product planning at Dell, says that IBM's memory scalability with the eX5 machines is "overkill" and "sacrifices density," and that the Dell approach uses traditional form factors without any snap-ons. The memory expansion features for rack servers in the IBM Nehalem-EX machines, as El Reg previously explained, are 1U boxes that plug into the eX5 chipset and stack on top of the rack servers.
The M910 blade is based on Intel's 7500 chipset, formerly known as "Boxboro," and it can support the full spectrum of eight Xeon 7500s (from the four-core up to the eight-core variants) as well as the three HPC-tweaked Xeon 6500 models, which, according to Intel, are only supposed to be available in two-socket boxes, but that is what the Dell spec sheet says. If you use the Xeon 6500s, you can only put two in the box.
On the left, the PowerEdge M910. On the top right, the PowerEdge R810 and R815, and on the bottom right, the PowerEdge R910.
The M910 has two Gigabit Ethernet ports with TCP/IP and iSCSI offload engines, a RAID controller, and room for two hot-swap 2.5-inch disks or solid state drives in either SATA or SAS flavors. The Dell Xeon 7500 blade has a variety of mezzanine cards for adding 10 Gigabit Ethernet, InfiniBand, and Fibre Channel links. On the software front, the M910 supports Microsoft's Windows Server 2008 (including the HPC variant), Red Hat's Enterprise Linux 5.5, Novell's SUSE Linux Enterprise Server 11, and Oracle's Solaris 10. Microsoft's Hyper-V and VMware's ESXi and ESX Server 4.0 hypervisors are certified on the M910 blade too.
A base M910 blade with two four-core 1.86 GHz E7520 processors, 64 GB of memory (using 4 GB sticks), two 73 GB SAS disks, a dual-port 10 GE mezzanine adapter, and no operating system will cost you $11,236. Going full tilt boogie with four eight-core 2.26 GHz X7560 processors and 512 GB of memory will set you back $62,189, with the memory accounting for $45,902 of that. At just under $3,000 a pop, most IT shops are not going anywhere near 16 GB DIMMs.
Two-socket boxes, four-socket memory
The PowerEdge R810 is the entry Nehalem-EX rack box from Dell, and it comes in a 2U form factor, the workhorse of the server industry. Using a feature Dell calls the FlexMem Bridge, this R810 machine, which is technically a four-socket server, can have two of its processor sockets turned off and yet leave their main memory (a total of 16 slots) available to the two sockets that remain turned on. So instead of having a two-socket box that tops out at 256 GB, this one can expand up to the same 512 GB maximum as the M910 blade and the other true four-socket Nehalem-EX box announced by Dell, the R910.
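The FlexMem Bridge math is easy to check: the two idle sockets lend their 16 DIMM slots to the two active ones. A toy sketch of the effect (slot counts from the article; the constants and function are ours, purely illustrative):

```python
SLOTS_PER_SOCKET = 8  # the R810's 32 slots spread across four sockets
DIMM_GB = 16          # the largest stick supported here

def r810_capacity_gb(flexmem):
    # A plain two-socket setup sees only its own 16 slots; with the
    # FlexMem Bridge, the idle pair's 16 slots are routed to the two
    # live sockets, doubling the reachable capacity.
    slots = 4 * SLOTS_PER_SOCKET if flexmem else 2 * SLOTS_PER_SOCKET
    return slots * DIMM_GB

print(r810_capacity_gb(False))  # 256 GB without the bridge
print(r810_capacity_gb(True))   # 512 GB with it
```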
The R810 can take any of the Nehalem-EX processors, but again, if you use the Xeon 6500s, you can only run them in two sockets. The R810 has six PCI-Express 2.0 peripheral slots (five x8 and one x4) plus an extra x4 slot dedicated to a base storage controller. The machine has room for six hot-swap 2.5-inch SAS or SATA drives, and it comes with two 1,100 watt power supplies. Windows and Linux are certified on this box, but Solaris 10 is not.
The default Dell configuration of the R810 comes with two six-core E7540 processors spinning at 2 GHz, 128 GB of memory (using 4 GB memory sticks and fully populating the slots in the box), and three 146 GB SAS disks. It costs $18,636 without an operating system. If you want to go to two sockets of the top-end, eight-core X7560, and then half-populate the box with 16 GB memory sticks for 256 GB total memory, then you're in for $34,591.
The last Dell box is the PowerEdge R910, which is a 4U box with four sockets and up to 64 memory slots, for a top-end 1 TB of main memory. The R910 also includes a failsafe embedded hypervisor, which is a fancy way of saying that the motherboard has redundant embedded flash drives with RAID mirroring. This flash is used to run a server hypervisor, and in the event one of them fries, the server can reboot and get its hypervisor from the second one. (It is amazing that these were not redundant to begin with, honestly.)
The PowerEdge R910 only supports the true Xeon 7500s. Even with 64 memory slots, the 4U chassis can still house up to sixteen front-mounted 2.5-inch SAS or SATA drives. The machine has seven PCI-Express 2.0 slots (one x16, four x8, and two x4) and can be rejiggered to have ten (six x4s and four x8s) if you need it. The R910 can have four 1,100 watt power supplies or four 750 watters that are more energy efficient. Windows and Linux are supported on this box (provided they have the right updates for the Nehalem-EX processors), but again Solaris is not supported.
The default PowerEdge R910 configuration comes with four 2 GHz E7540 processors (each with six cores), 64 of the 2 GB DIMMs (come on, that's crazy), three 146 GB SAS disks, and no operating system. That's a cool $26,905. If you have an extra $92,741 lying around, you can jack the box up to 1 TB using 16 GB sticks, and for a total price of $125,429, that also gets you four of the eight-core X7560s.
Dell does not seem inclined to engineer bigger Nehalem-EX boxes, although that is technically possible, and in the case of eight-socket boxes, it can be done gluelessly and with very little work using the Boxboro chipset.
"We're staying focused on the sweet spot," says Payne. "While it is possible, the opportunity for bigger boxes is shrinking, as it has been for years. As we look at performance and scalability compared to 16-way RISC systems, we think there is a compelling case that can be made with what we have. A four-socket Nehalem-EX can compete."
Bootnote: This story originally said that the PowerEdge R815 was a Xeon 7500 box. It is actually an Opteron 6100 machine, which El Reg will cover separately. ®