IBM tweaks rack and blade servers
GPU blade still MIA, HPC clusters revved
IBM's certified software stack on these iDataPlex machines is a bit stale, however. Microsoft's Windows Server 2003 Enterprise Edition (with the Compute Cluster Service extensions for parallel HPC applications) is certified on these nodes, as are Novell's SUSE Linux Enterprise Server 10, Red Hat Enterprise Linux 4 and 5, and the VMware ESX Server/ESXi 3.5 and 4.0 hypervisors.
Microsoft's Windows HPC Server 2008 R2 is out and much improved over this dusty old Microsoft code, and ditto for SLES 11 SP1, which has been out since the spring. RHEL 6 is right around the corner, in theory. If Big Blue wants to make sales with iDataPlex dx360 M3s, it had better get on the stick with the software stack.
IBM is also now supporting 16 GB memory sticks in the dx360 M3 servers, boosting main memory to a maximum of 256 GB. Support for the new GPUs and expanded memory will be available on December 17.
The BladeCenter HS22 and HS22V (a blade with more memory slots, aimed at server virtualization workloads) can now use 1.35 volt memory, which runs cooler than the 1.5 volt parts. IBM is shipping 2 GB and 4 GB memory sticks at the lower voltage, which it says consume 15 per cent less energy for a given unit of capacity.
Both of these blades have already been updated to use the six-core Xeon 5600s; the HS22 supports up to 192 GB of memory, while the HS22V supports up to 288 GB using 16 GB memory sticks, which do not yet come in 1.35 volts. These low-volt 2 GB and 4 GB memory sticks for the HS22 and HS22V blades will be available on November 2.
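Those memory maxima fall out of simple slot arithmetic: maximum capacity is just DIMM slots times the largest supported stick. A minimal sketch, assuming the slot counts (12 for the HS22, 18 for the HS22V) inferred from the stated 192 GB and 288 GB figures with 16 GB sticks:

```python
# Back-of-the-envelope check of the stated blade memory maxima.
# Slot counts here (12 for HS22, 18 for HS22V) are assumptions
# inferred from the article's 192 GB and 288 GB figures.

def max_memory_gb(dimm_slots: int, stick_gb: int) -> int:
    """Maximum main memory in GB: slots times stick size."""
    return dimm_slots * stick_gb

print(max_memory_gb(12, 16))  # HS22  -> 192
print(max_memory_gb(18, 16))  # HS22V -> 288
```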
The Brocade 10 Gigabit Ethernet Converged Switch for IBM's BladeCenter
IBM also announced a two-port 10 Gigabit Ethernet converged network adapter for its blade servers, along with a matching 10 Gigabit Ethernet switch module for the BladeCenter chassis that supports the Fibre Channel over Ethernet (FCoE) protocol for converging server and storage traffic on the same switch.
The base switch module comes with 16 active ports, but you can use a software key (for a fee, of course) to unlock all 30 ports in the module. That gives you fourteen 10 GE ports for the blade server backplane, plus eight external 10 GE ports and eight 8 Gb/sec Fibre Channel ports. The switch runs Brocade's Data Center Fabric Manager software, which is integrated with IBM's Systems Director server management tools.
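The port count tallies up as described: 14 internal plus 16 external ports make 30, of which the base license activates 16. A quick sketch of that arithmetic, using only the figures given in the article:

```python
# Port arithmetic for the Brocade converged switch module:
# 14 internal 10 GE ports to the blade backplane, 8 external
# 10 GE ports, and 8 external 8 Gb/sec Fibre Channel ports.
INTERNAL_10GE = 14
EXTERNAL_10GE = 8
EXTERNAL_FC = 8

total_ports = INTERNAL_10GE + EXTERNAL_10GE + EXTERNAL_FC
base_license = 16
unlocked_by_key = total_ports - base_license  # ports the paid key enables

print(total_ports)      # 30
print(unlocked_by_key)  # 14
```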
For the System x3250 M3 1U rack server, announced in January, IBM is offering customers a bunch of new two-core processor options, including the Xeon L3406 running at 2.26 GHz (30 watts) and the Core i3-550 running at 3.2 GHz (73 watts). IBM is also tossing the four-core Xeon X3480, spinning at 3.06 GHz and burning 95 watts, into the rack server.
IBM already supported the two-core Celeron G1101 running at 2.26 GHz (73 watts), the Pentium G6950 running at 2.8 GHz (73 watts), the Core i3-530 running at 2.93 GHz (73 watts), and the Core i3-540 running at 3.06 GHz (73 watts) in the System x3250 M3.
Lastly, IBM is renaming its pre-configured HPC clusters Intelligent Clusters, and is rolling out new switch and server options for them. The quad-socket, Opteron 6100-based System x3755 M3, announced at the end of August, can now be part of prefabbed Intelligent Clusters, as can the S60 Ethernet switch from Force10 Networks, the IS5100 and IS5300 quad data rate InfiniBand switches from Mellanox, the Grid Director 4200 InfiniBand switch from Voltaire, and the rebadged Ethernet switches from Juniper that IBM sells under its own Ethernet Switch J Series brand. ®