Cray's midrange line big on Xeons, GPUs

Packing some Nehalem-EX punch

The CX1000-C blades support Microsoft's Windows HPC Server 2008 and Red Hat's Enterprise Linux 5 operating systems. The Cray Cluster Manager variant of Platform Computing's LSF and Adaptive Software tools is tossed in as well.

The CX1000-G is also a blade setup, but it marries Xeon 5600 blades with Nvidia's M1060 GPU co-processors to boost number crunching for the kinds of workloads where GPUs make sense. It uses essentially the same 7U chassis, with the electronics and two fan blades at the top center. But this machine has nine double-wide, half-height, two-socket blade servers based on the Xeon 5600s, each carrying two of the M1060 GPUs.

The CX1000-G blades have only six DDR3 memory slots, so you have to use more expensive 8 GB modules to get up to 48 GB of memory per blade. The GPU blades have two ConnectX InfiniBand adapters to link out to the 36-port InfiniBand switch in the chassis, presumably doubling the pipes because each blade has four computing elements (two CPUs and two GPUs) instead of the two (two CPUs) on the CX1000-C blades. Like their C series counterparts, the CX1000-G blades have room for one SATA disk or SSD.
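As a quick sanity check on those memory figures, here is a minimal Python sketch of the blade and chassis math; the per-chassis total is my own derivation from the nine-blade count, not a number Cray has quoted.

```python
# Back-of-envelope memory math for the CX1000-G blade described above.
# Slot count, module size, and blade count come from the article; the
# per-chassis total is a derived figure, not a Cray-quoted spec.
DIMM_SLOTS_PER_BLADE = 6
DIMM_SIZE_GB = 8             # the pricier 8 GB DDR3 modules
BLADES_PER_CHASSIS = 9       # double-wide, half-height GPU blades

memory_per_blade_gb = DIMM_SLOTS_PER_BLADE * DIMM_SIZE_GB         # 48 GB
memory_per_chassis_gb = memory_per_blade_gb * BLADES_PER_CHASSIS  # 432 GB

print(f"Per blade:   {memory_per_blade_gb} GB")
print(f"Per chassis: {memory_per_chassis_gb} GB")
```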

The last, and perhaps most interesting, of the new CX1000 midrange supers will be based on the Nehalem-EX Xeon 7500 processors, due from Intel on March 30. Cray is not at liberty to say much about these machines, but did offer some hints.

If the CX1000-C represents scale-out supercomputing and the CX1000-G represents "scale through" computing (a new term, as far as I know, for using GPUs to augment CPUs), then the CX1000-S machines will deliver "scale up" HPC with a "fat memory node". The Xeon 5600 tops out at two-socket SMP, so that leaves the eight-core Xeon 7500s, their QuickPath Interconnect, and Intel's "Boxboro" chipset for its most recent Xeon MP and Itanium processors to build SMP nodes that scale to 128 cores in a single system image. That would be a 16-socket box, and as far as anyone knows, Intel is not offering a chipset that stretches that far; IBM and Bull, however, have their respective eX5 and Fame 2G chipsets in the works.

Cray could have done its own chipset, of course, but it is equally likely that the company is licensing either the IBM or the Bull chipset. Considering Cray's intense competition with IBM (even though Cray chief executive officer Peter Ungaro used to run IBM's supercomputer business), using IBM's eX5 chipset seems unlikely, even if it were possible. IBM has not said anything about its plans for Nehalem-EX machines beyond four sockets, but according to information obtained by The Register last summer, Bull's Fame 2G chipset (anchored by the Bull Coherent Switch) and related Mesca blade servers were designed to scale up to 16 sockets and offer up to eight DDR3 memory slots per socket.

The Mesca blade servers have four sockets and up to 256 GB per blade, and four of these are lashed together to make a 16-socket, 128-core, 1 TB fat node. InfiniBand switches could be used to link multiple nodes together if necessary, but it seems like the CX1000-S is aimed at providing a single fat node for local and departmental HPC work where having a big memory space to play in is more important than having lots of cores.
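To make the fat-node arithmetic explicit, here is a minimal Python sketch of how the quoted figures fit together; the 8 GB DIMM size is an inference from Bull's eight-slots-per-socket design, not a confirmed spec.

```python
# How the quoted CX1000-S fat node figures hang together.
# Core, socket, and memory counts come from the article; the DIMM size
# is inferred from Bull's eight-slots-per-socket figure, not confirmed.
CORES_PER_SOCKET = 8          # Nehalem-EX Xeon 7500
SOCKETS_PER_BLADE = 4         # Bull Mesca blade
MEMORY_PER_BLADE_GB = 256
BLADES_PER_NODE = 4
SLOTS_PER_SOCKET = 8          # Fame 2G design point

sockets = SOCKETS_PER_BLADE * BLADES_PER_NODE              # 16 sockets
cores = sockets * CORES_PER_SOCKET                         # 128 cores
memory_tb = MEMORY_PER_BLADE_GB * BLADES_PER_NODE / 1024   # 1 TB
dimm_gb = MEMORY_PER_BLADE_GB / SOCKETS_PER_BLADE / SLOTS_PER_SOCKET  # 8 GB

print(f"{sockets} sockets, {cores} cores, {memory_tb:.0f} TB, {dimm_gb:.0f} GB DIMMs")
```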

Cray could easily make CX1000-C and CX1000-G equivalents using AMD's future eight-core Opteron 4100 and imminent twelve-core Opteron 6100 processors (due on March 29). But making a fat node system is more problematic, since AMD's own chipsets for the Opteron 6100s top out at four sockets and 384 GB of main memory using 8 GB DIMMs. That is a reasonably fat node, to be sure. But it is not 1 TB.
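A similar sketch shows why the four-socket Opteron alternative falls short of the 1 TB mark; the slots-per-socket count is inferred from the 384 GB and 8 GB DIMM figures, not an AMD-quoted number.

```python
# Why the four-socket Opteron 6100 alternative stops short of 1 TB.
# The 384 GB and 8 GB DIMM figures are from the article; the slots-per-
# socket count is an inference from them, not an AMD-quoted number.
MEMORY_GB = 384
SOCKETS = 4
DIMM_SIZE_GB = 8

dimms_per_socket = MEMORY_GB // SOCKETS // DIMM_SIZE_GB   # 12 slots per socket
shortfall_gb = 1024 - MEMORY_GB                           # 640 GB shy of 1 TB

print(f"{dimms_per_socket} DIMM slots per socket, {shortfall_gb} GB shy of 1 TB")
```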

The Cray CX1000-C and CX1000-G machines are available now, with entry configurations costing under $100,000. The feeds and speeds of the entry configs were not available at press time. Cray has not said when it plans to put Nvidia's Fermi GPUs in the blades; those are the parts customers really want, since they have more oomph as well as error correction. ®
