Original URL: https://www.theregister.com/2010/10/07/ibm_server_networking_update/

IBM tweaks rack and blade servers

GPU blade still MIA, HPC clusters revved

By Timothy Prickett Morgan

Posted in Channel, 7th October 2010 15:37 GMT

Today is IBM's big storage announcement day, as we report elsewhere, but the company also tweaked a bunch of servers and associated switching options.

The System x3250 M3 rack machine and the iDataPlex dx360 M3 rack-blade hybrid machine, both of which are enhanced today, were announced earlier this year. Today they get memory and I/O enhancements and new CPU or GPU options, depending on the platform.

IBM's BladeCenter GPU expansion blade, which was previewed at the GPU Tech Conference two weeks ago, is not part of today's announcements. But it should have been.

The forthcoming GPU expansion blade houses a single Nvidia Tesla M2070 GPU co-processor, which has a custom heat sink and plugs into a PCI-Express x16 slot on the expansion blade. Expansion slots on the Xeon-based HS22 blade server and on the expansion blades allow the two to be electrically linked, and according to IBM, up to four of the GPU expansion blades will be able to be snapped onto a single HS22 blade for a five-wide HPC computing element.

The BladeCenter chassis is 14 blades wide, which means you can get three HS22 blades in the box, two of them with three GPU co-processors and one with four, with one slot left over. (No, you can't hang one off the side of the chassis to make a balanced 15-blade configuration.) IBM has said that it will ship this blade in the fourth quarter, but there's no word on when it might be formally announced.
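
If you want to check the slot math yourself, here's a quick Python sketch. The assumption that each GPU expansion blade eats exactly one standard blade slot is ours, since IBM hasn't published the final mechanicals:

    # Back-of-the-envelope slot math for a 14-slot BladeCenter chassis.
    # Assumption (ours, not IBM's): each HS22 CPU blade and each GPU
    # expansion blade occupies exactly one blade slot.
    CHASSIS_SLOTS = 14

    def slots_used(gpu_blades_per_hs22):
        # One slot per HS22 plus one per attached GPU expansion blade
        return sum(1 + gpus for gpus in gpu_blades_per_hs22)

    config = [3, 3, 4]  # two HS22s with three GPU blades, one with four
    used = slots_used(config)
    print(used, CHASSIS_SLOTS - used)  # 13 slots used, 1 slot spare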

Speaking of GPUs, the iDataPlex dx360 M3 server has some tweaks related to GPUs. This blade-rack hybrid thingie was updated with Intel's six-core Xeon 5600s in May. That was also when IBM rolled out support in the 2U-high iDataPlex compute tray for Nvidia's Tesla M1060 (which packs not unimpressive floating point oomph) and the fanless, single-wide M2050 GPU co-processor.

The M2050, also announced in May, sports the new Fermi GPU and delivers 515 gigaflops of number-crunching power. The dx360 M3 can put two GPU co-processors and one compute element in a single 2U tray.

Today, IBM is allowing the M2070 fanless GPU co-processor into the dx360 M3 box. The M2070 is a double-wide card that is rated at the same 515 gigaflops, but has 6 GB of GDDR5 memory instead of the 3 GB on the M2050 card. IBM is also supporting a variant of this GPU called the M2070-Q, which has support for Quadro graphics drivers in the event that customers want to use the GPU co-processors as visualization engines instead of math engines.

With the M2050 and M2070 GPUs, the dx360 M3 machines can cram 49 teraflops of oomph into an iDataPlex rack. (The iDataPlex rack is not a normal 42U rack a little more than three feet deep, but is twice as wide and half as deep, which allows IBM to be more efficient about cooling and therefore to cram more stuff into a data center than it can with standard racks and servers.)
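
That 49 teraflops figure roughly checks out. Here's a sanity-check sketch in Python; the 84U of node space and the peak-flops formula for the Xeon 5600 are our assumptions, not IBM's published math:

    # Sanity check on IBM's 49 teraflops per iDataPlex rack claim.
    # Assumptions (ours): the rack offers 84U of node space, i.e. 42
    # two-U dx360 M3 trays, each holding two 515-gigaflop Tesla cards
    # and two six-core Xeon 5600s at 2.93 GHz.
    TRAYS = 84 // 2                    # 42 trays at 2U apiece
    gpu_tf = TRAYS * 2 * 515 / 1000.0  # ~43.3 TF from the GPUs

    # Xeon 5600 peak: 6 cores x 4 double-precision flops/cycle x GHz
    cpu_gflops_per_socket = 6 * 4 * 2.93
    cpu_tf = TRAYS * 2 * cpu_gflops_per_socket / 1000.0  # ~5.9 TF

    print(round(gpu_tf + cpu_tf, 1))   # ~49.2 TF, close to IBM's figure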

IBM's certified software stack on these iDataPlex machines is a bit stale, however. Microsoft's Windows Server 2003 Enterprise Edition (with the Compute Cluster extensions for parallel HPC applications) is certified on these nodes, as are Novell's SUSE Linux Enterprise Server 10, Red Hat Enterprise Linux 4 and 5, and the VMware ESX Server/ESXi 3.5 and 4.0 hypervisors.

Microsoft's Windows HPC Server 2008 R2 is out and much-improved over this dusty old Microsoft code, and ditto for SLES 11 SP1, which has been out since the spring. RHEL 6 is right around the corner, in theory. If Big Blue wants to make sales with iDataPlex dx360 M3s, it had better get on the stick with the software stack.

IBM is also now supporting 16 GB memory sticks in the dx360 M3 servers, boosting main memory to a maximum of 256 GB. Support for the new GPUs and expanded memory will be available on December 17.

The BladeCenter HS22 and HS22V (a blade with more memory slots intended for server virtualization workloads) can now use 1.35 volt memory, which burns at a lower temperature than the 1.5 volt parts. IBM is shipping 2 GB and 4 GB memory sticks in the lower voltage, which IBM says consume 15 per cent less energy for a given unit of capacity.
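
IBM's 15 per cent figure squares with first-order physics: as a rough rule, dynamic power scales with the square of supply voltage. Here's an illustrative calculation (the pure voltage-squared scaling is a simplification of ours, since DRAM has static power draw too):

    # Rough check on the low-voltage DIMM savings. First-order rule:
    # dynamic power scales with the square of supply voltage. Real DRAM
    # has static draw too, so the theoretical ceiling overstates things.
    v_std, v_low = 1.5, 1.35
    saving = 1 - (v_low / v_std) ** 2
    print(round(saving * 100))  # ~19% theoretical; IBM quotes 15%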

Both of these blades have already been updated to use the six-core Xeon 5600s; the HS22 supports up to 192 GB of memory and the HS22V up to 288 GB, both using 16 GB memory sticks, which do not yet come in 1.35 volts. These low-volt 2 GB and 4 GB memory sticks for the HS22 and HS22V blades will be available on November 2.
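
Those ceilings imply the DIMM-slot counts, assuming every slot is filled with a 16 GB stick:

    # DIMM slots implied by the capacity ceilings, assuming every slot
    # holds a 16 GB stick.
    print(192 // 16, 288 // 16)  # 12 slots on the HS22, 18 on the HS22V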

The Brocade 10 Gigabit Ethernet Converged Switch for IBM's BladeCenter

IBM also announced for the BladeCenter chassis a two-port 10 Gigabit Ethernet converged network adapter for its blade servers and a matching 10 Gigabit Ethernet switch module that supports the Fibre Channel over Ethernet (FCoE) protocol, converging server and storage traffic on the same switch.

The base switch module comes with 16 ports enabled, but you can use a software key (for a fee, of course) to upgrade to the full 30 ports in the switch module: fourteen 10 GE ports facing the blade server backplane, plus eight external 10 GE ports and eight 8 Gb/sec Fibre Channel ports. The switch runs Brocade's Data Center Fabric Manager software, which is integrated with IBM's Systems Director server management tools.
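
The port arithmetic, for the record:

    # Port accounting for the fully enabled switch module, per the
    # figures above: internal backplane, external Ethernet, Fibre Channel.
    internal_10ge, external_10ge, fc_8gb = 14, 8, 8
    print(internal_10ge + external_10ge + fc_8gb)  # 30 ports in total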

For the System x3250 M3 1U rack server, announced in January, IBM is offering customers a bunch of new two-core processor options, including the Xeon L3406 running at 2.26 GHz (30 watts) and the Core i3-550 running at 3.2 GHz (73 watts). IBM is also tossing the four-core Xeon X3480, spinning at 3.06 GHz and burning 95 watts, into the rack server.

IBM already supported the two-core Celeron G1101 running at 2.26 GHz (73 watts), the Pentium G6950 running at 2.8 GHz (73 watts), the Core i3-530 running at 2.93 GHz (73 watts), and the Core i3-540 running at 3.06 GHz (73 watts) in the System x3250 M3.

Lastly, IBM is renaming its pre-configured HPC clusters Intelligent Clusters, and is rolling out new switch and server options for them. The quad-socket, Opteron 6100-based System x3755 M3, announced at the end of August, can now be part of prefabbed Intelligent Clusters, as can the S60 Ethernet switch from Force10 Networks, the IS5100 and IS5300 quad data rate InfiniBand switches from Mellanox, the Grid Director 4200 InfiniBand switch from Voltaire, and the rebadged Ethernet switches from Juniper that IBM sells under its own Ethernet Switch J Series brand. ®