Big Blue trots out Xeon E5-2600 server lineup
One tower, two racks, one blade ... and this other thing
IBM announced a passel of shiny new x86 iron this week, all built around Intel's two-socket Xeon E5-2600 chip, which formally debuted on Tuesday – although select customers such as Big Blue have been fiddling with that server chip for months.
IBM generates a lot of its revenue and profit in hardware and software from mainframe and Power servers, but its x86-based System x rack and tower and BladeCenter blade server businesses can, in any given quarter, generate as much revenue as those mainframe and Power boxes individually.
IBM's System x boxes are often sold into the same accounts, as well, so Big Blue has to hustle and get various tower, rack, and blade servers out there when Intel does a chip announcement to keep the x86 competition out of its accounts – and with this week's announcements, it has.
The integrated PCI-Express 3.0 controllers on Intel's new "Jaketown" Xeon E5-2600 processor now allow server makers such as IBM to get the first PCI-Express 3.0 servers in their lines out the door. Some of IBM's Power7-based servers are still using PCI-Express 1.0 slots, and only last October others got PCI-Express 2.0 slots for the first time.
For many database clustering and supercomputing jobs where high-speed and high-bandwidth networking are key, the addition of PCI-Express 3.0 peripheral slots is as important as the addition of two more cores – up to eight in the Xeon E5-2600 line, a boost from the six in the Xeon 5600s – and the doubling of main memory capacity to 768GB.
This extra I/O bandwidth may or may not be important to customers using IBM's tower servers, which get tucked under desks and put in closets in remote offices at its large enterprise customers, as well as being sold into SMB shops by Big Blue's myriad channel partners. But the new System x3500 M4 tower server announced this week will no doubt appeal to many SMB shops and enterprise customers because it has three times the disk capacity of the System x3500 M3 server it replaces in the IBM lineup.
Tower of power: System x3500 M4
The System x3500 M4 can support all sixteen models of the Xeon E5-2600s, although not all of those parts are standard. At the moment, using registered DDR3 memory sticks, the main memory on this two-socket box tops out at 384GB using 16GB memory sticks in the box's dozen memory slots per socket. Later this year, IBM will support load-reduced LR-DIMM DDR3 main memory, which will allow the three memory slots per channel on the Xeon E5 socket to be loaded up with 32GB sticks.
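The memory ceilings above are just slots times stick size, and you can sanity-check them quickly. A minimal sketch, with the slot counts and stick capacities taken from the figures quoted above:

```python
# Back-of-the-envelope check of the x3500 M4 memory ceilings quoted above.
SOCKETS = 2
SLOTS_PER_SOCKET = 12  # four channels per socket, three DIMMs per channel

def max_memory_gb(stick_gb):
    """Total capacity with every slot filled with one stick size."""
    return SOCKETS * SLOTS_PER_SOCKET * stick_gb

print(max_memory_gb(16))  # 384GB with today's 16GB RDIMMs
print(max_memory_gb(32))  # 768GB once 32GB LR-DIMMs ship later this year
```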
The machine has an optical drive bay, an empty 5.25-inch media bay, and four storage bays that can house a total of 32 2.5-inch drives. One larger bay that holds eight 3.5-inch disks or flash drives is also available, and the system has ServeRAID M5110 and M1115 disk controllers on the motherboard with RAID 0, 1, and 10 support – if you want to add RAID 5 to this controller, you have to pay for an optional daughter card. IBM is also offering a 6Gb/sec SAS controller, since the SAS controller on the Intel "Patsburg" C600 chipset tops out at 3Gb/sec speeds.
With 900GB SAS disks, you top out at 28.8TB with 2.5-inch drives. If you can get by with SATA, you can slide in 32 of the 1TB SATA disks and push it up to 32TB, and if you want 3.5-inch disks, you can put up to eight 3TB units into the chassis.
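Those disk ceilings are simply drive count times drive size. A quick sketch of the arithmetic, using the drive options quoted above:

```python
# Checking the quoted x3500 M4 disk ceilings (figures from the article).
def capacity_tb(drives, drive_gb):
    """Raw capacity in TB for a given drive count and per-drive size in GB."""
    return drives * drive_gb / 1000.0

print(capacity_tb(32, 900))   # 28.8TB with 32 2.5-inch 900GB SAS drives
print(capacity_tb(32, 1000))  # 32.0TB with 32 2.5-inch 1TB SATA drives
print(capacity_tb(8, 3000))   # 24.0TB with eight 3.5-inch 3TB drives
```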
The x3500 M4 has eight PCI-Express 3.0 slots: five x8 and three x16, and you need to have the second processor in the system to make use of the second and third x16 slots.
IBM has chosen Intel silicon to put four Gigabit Ethernet ports on the mobo. There is a wide variety of Broadcom, Emulex, Intel, and QLogic networking adapters that support Gigabit and 10 Gigabit Ethernet speeds, as well as QLogic, Brocade, and Emulex Fibre Channel adapters for storage.
The System x3500 M4 can be tipped on its side and slipped into a 5U rack-mount chassis if you want a fat racker with lots of disk drives.
In its base configuration, the System x3500 M4 with a single four-core Xeon E5-2609 (2.4GHz), 4GB of memory, and no disk costs $1,665.
Rack 'em up
Like every other vendor in the server racket, IBM has two basic rack-based servers based on any two-socket Xeon server generation: a 1U machine – lovingly called a pizza box – with limited peripheral expansion, and a 2U machine with more room to add stuff.
IBM's System x3550 M4 "pizza box" 1U rack server
The pizza box in the new System x lineup is the x3550 M4, and as its name suggests, it is related to the tower box. As with the x3500 M4, all sixteen of the Xeon E5-2600 processors are supported in the x3550 M4, and the machine similarly has 24 memory slots across its two sockets, which top out at 384GB today using 16GB DDR3 RDIMM sticks and which will be boosted to a 768GB max when 32GB LR-DIMM sticks come out later this year.
The machine has four Gigabit Ethernet ports on the mobo, and an integrated 6Gb/sec SAS controller on the board with the RAID 5 option. The system has room for two rows of 2.5-inch disks stacked horizontally in the front of the chassis, or three 3.5-inch drives side-by-side.
The x3550 M4 has two PCI-Express 3.0 slots: one low-profile, full-length x16 slot and one full-height, half-length x8 slot. If you add the second CPU to the box, you can convert that x8 slot to an x16 slot, or swap it for a PCI-X slot to plug older cards into the box.
The base System x3550 M4 server configured exactly like the x3500 M4 tower above (one E5-2609 processor, 4GB of memory, and no disk) costs $1,819.
The new System x3650 M4 rack server doubles the chassis height to a 2U form factor, giving you enough room to rotate those 2.5-inch drives by 90 degrees and line up 16 of them, side-by-side.
IBM's System x3650 M4 2U rack server
This 2U machine is the workhorse of the enterprise, often supporting database and application servers, while the 1U machines typically run infrastructure workloads such as Web serving. The full Xeon E5-2600 lineup is once again available in the System x3650 M4, and the same memory options and constraints of the x3550 apply.
The machine can support one disk bay with eight 2.5-inch drives, and you can add a second one to get to the full 16 drives; you can also get a single bay that supports up to six 3.5-inch drives. If you really like lots of SSDs, IBM has an option that will let you plug in 32 of its 1.8-inchers.
The System x3650 has three base PCI-Express 3.0 slots – all of them x8 – and you have to add the second processor and a second riser card to the machine to add three more slots. There are three flavors of risers: the first has three x8 slots, the second has one x16 slot and one x8 slot, and the third has one x16 slot and two PCI-X legacy slots. You have the same quad-port Intel LOM providing basic networking, with a slew of other networking and disk controllers available for the PCI slots.
The base x3650 M4 rack server configured like the two other machines above – one four-core E5-2609 processor, 4GB of memory, and no disks – costs $2,419.
On the blade front, there's only one new machine, the BladeCenter HS23. This two-socket blade is a single-width, full-height blade that can slide into the existing variations of the BladeCenter enclosures.
The HS23 blade doesn't have enough room for the full 24 memory slots, so this one is crimped back to 16 slots and therefore maxes out at 256GB using registered DDR3 main memory. The memory is laid out in an interesting fashion, as you can see below:
The HS23 blade has two hot-plug 2.5-inch drive bays and an integrated LSI SAS2004 disk controller. The CIOv slot is a special connector on the HS23 blade that implements a PCI-Express 3.0 slot that is actually extended to an adjacent blade, and the CFFh expansion slot extends a PCI-Express 3.0 x16 slot to outside of the blade into the chassis as well.
This CFFh I/O expansion slot was used on the prior-generation HS22 blade server, based on Xeon 5600 processors, to snap together up to four expansion blades equipped with Nvidia Tesla GPUs. The interesting thing about this CFFh interconnect is that if you use it to stack up one blade and four GPUs, the top CFFh port is still open to attach other PCI peripherals.
The HS23 has a 10GE interposer card on the mobo to provide integrated (and virtualized) networking. There are a bunch of different virtual fabric adapters from Broadcom, IBM, and Emulex available for the HS23, as well as a mix of other Ethernet and InfiniBand cards from QLogic, Broadcom, Intel, Mellanox, and Brocade.
The BladeCenter HS23 Xeon E5-2600 server
A base BladeCenter HS23 server with a single four-core Xeon E5-2603 running at 1.8GHz and with 4GB of main memory and no disks costs $1,815.
The four servers outlined above will start shipping on March 16.
Each of IBM's new Xeon E5-2600 machines supports Microsoft's Windows Server 2008 R2 in its many permutations. Red Hat's Enterprise Linux 5 and 6 and SUSE Linux Enterprise Server 10 and 11 are certified to run on the System x3500 M4. VMware's ESXi 5.0 hypervisor is supported for server virtualization. The servers have an integrated USB flash port for storing the hypervisor, if you want to do it that way.
Neither a blade nor a rack – a platypus of sorts
The iDataPlex dx360 M4 server won't officially start shipping until April 16, but some big customers in the supercomputing space are already buying these hybrid rack/blade machines to build HPC clusters.
Just this week, the US National Oceanic and Atmospheric Administration's National Weather Service said it was moving from a cluster of Power 575 servers using Power6 processors to a new 149 teraflops iDataPlex platform using Intel's Xeon E5-2600 processors. Last November the US National Center for Atmospheric Research, which does longer-range climate modeling, tapped IBM to replace its own cluster of Power 575 machines with a much larger 1.6 petaflops Xeon E5-2600 cluster, called "Yellowstone". The Leibniz Supercomputing Centre (LRZ) in Germany is also building a 3 petaflops supercomputer called "SuperMUC" based on the new iDataPlex nodes.
The iDataPlex rack setup is half as deep as a standard server rack, and comes in a cabinet with two columns of machines, side-by-side. As long as you can deal with the wider and shallower racks, you can get twice as many servers per square foot of floor space as you can with standard rack machines.
The iDataPlex dx360 M4
The dx360 M4 can stack two two-socket compute nodes in a single enclosure, and can use any Xeon E5-2600 from the power-sipping 60 watter all the way up to the turbine-spinning 130 watter. According to the IBM spec sheets, the top-bin eight-core 2.9GHz 135 watt E5-2690 is not supported on the dx360 M4, and neither is the four-core Xeon E5-2643, which runs at 3.3GHz.
Each dx360 M4 node has two processors and a total of 16 memory slots, the same as the HS23 blade server. There are four memory channels per socket, but you can only use two DIMMs per channel. Also, you can only use unbuffered or registered DDR3 sticks – no LR-DIMMs.
The machine has two PCI-Express 3.0 slots on riser cards and a PCI-Express 3.0 x8 mezzanine card that can be used for either 10GE or InfiniBand networking. Each node in the two-node system can have two Gigabit Ethernet ports, and there's one 3.5-inch drive bay for local storage.
One interesting bit about the iDataPlex design is that the power supplies and disk slot are off to the left side and pulled out a little, with all of the peripheral and networking slots in the front of the machine, and the CPUs and memory at the back. Here's what it looks like:
The dx360 M4 server supports the latest releases of Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, and SUSE Linux Enterprise Server 10 and 11. The iDataPlex dx360 M4 2U chassis costs $455, and a compute node with no processors or disk installed, but 16GB of memory, costs $3,969. You can put two Nvidia Tesla GPU co-processors into each node of the dx360 M4.
Finally, IBM did one more thing, and this was on its existing BladeCenter HX5 blade servers, which launched two years ago with the Xeon 7500 processors and which were updated last year with the Xeon E7s. These Intel chips are for larger SMPs or two-socket boxes that need fatter memory. Starting March 16, IBM will offer four-rank DDR3 memory sticks at 16GB capacities running at 1.35 volts, lower than the standard 1.5 volt memory.
Gigabyte for gigabyte, the low-voltage memory delivers the same capacity while burning approximately 20 per cent less juice, and in machines with 256GB across two sockets, the power savings can add up. And now customers can have fatter memory sticks than last year, when 8GB was the upper limit for the HX5 machines using low-volt memory. The 1.35 volt memory is only supported with the Xeon E7 chips, which have support for it etched into their on-chip memory controllers. ®
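To get a feel for what that 20 per cent saving means per machine, here is a minimal sketch. The per-DIMM wattage is an illustrative assumption; only the 20 per cent figure comes from the text:

```python
# Rough estimate of the low-voltage memory savings described above.
DIMMS = 16                # e.g. 256GB as sixteen 16GB sticks across two sockets
WATTS_PER_DIMM_15V = 5.0  # hypothetical draw for a standard 1.5V RDIMM
SAVINGS = 0.20            # the quoted low-voltage saving

standard_w = DIMMS * WATTS_PER_DIMM_15V
low_volt_w = standard_w * (1 - SAVINGS)
print(standard_w - low_volt_w)  # watts saved per machine on these assumptions
```

On these assumed numbers the saving is modest per box, but across a rack of memory-heavy nodes it compounds.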