Cray's midrange line big on Xeons, GPUs

Packing some Nehalem-EX punch

The CX1000-C blades support Microsoft's Windows HPC Server 2008 and Red Hat's Enterprise Linux 5 operating systems. Cray's Cluster Manager variant of Platform Computing's LSF and Adaptive Software tools is tossed in as well.

The CX1000-G is a blade setup as well, but it marries Xeon 5600 blades with Nvidia's M1060 GPU co-processors to boost number crunching for the kinds of workloads where GPUs make sense. The chassis is essentially the same 7U enclosure, with the electronics and two fan blades at the top center, but this machine holds nine double-wide, half-height, two-socket blade servers based on the Xeon 5600s, each carrying two of the M1060 GPUs.

The CX1000-G blades have only six DDR3 memory slots, so you have to use more expensive 8 GB modules to get up to 48 GB of memory per blade. The GPU blades have two ConnectX InfiniBand adapters to link out to the 36-port InfiniBand switch in the chassis, presumably double the pipes because there are four computing elements per blade (two CPUs and two GPUs) instead of two with the CX1000-C blades (two CPUs). The CX1000-G blades have room for one SATA or SSD drive, like their C series counterparts.
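
As a quick sanity check on those figures, here is a back-of-the-envelope sketch of the per-blade maths (in Python, with purely illustrative variable names); the slot count, module size and device counts are the ones quoted above:

# Per-blade maths for a CX1000-G blade, using the figures quoted above
dimm_slots = 6
dimm_size_gb = 8                        # the pricier 8 GB modules
cpus, gpus = 2, 2                       # two Xeon 5600s, two M1060 GPUs

memory_gb = dimm_slots * dimm_size_gb   # 6 x 8 GB = 48 GB per blade
compute_elements = cpus + gpus          # four, hence the doubled-up InfiniBand

print(memory_gb, compute_elements)      # 48 4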

The last, and perhaps most interesting, of the new CX1000 midrange supers will be based on the Nehalem-EX Xeon 7500 processors, due from Intel on March 30. Cray is not at liberty to say much about these machines, but did offer some hints.

If the CX1000-C represents scale-out supercomputing and the CX1000-G represents "scale through" computing (a new term, as far as I know, for using GPUs to augment CPUs), then the CX1000-S machines will deliver "scale up" HPC with a "fat memory node". The Xeon 5600 tops out at two-socket SMP, so that leaves the eight-core Xeon 7500s, their QuickPath Interconnect, and Intel's "Boxboro" chipset for its most recent Xeon MP and Itanium processors to build SMP nodes that will scale to 128 cores in a single system image. That would be a 16-socket box. As far as anyone knows, Intel is not offering a chipset that scales that far, but IBM and Bull have their respective eX5 and Fame 2G chipsets in the works.

Cray could have done its own chipset, of course, but it is equally likely that the company is licensing either the IBM or the Bull chipset. Considering Cray's intense competition with IBM (even though Cray chief executive officer Peter Ungaro used to run IBM's supercomputer business), using IBM's eX5 chipset seems unlikely, even if it were possible. IBM has not said anything about its plans for Nehalem-EX machines beyond four sockets, but according to information obtained by The Register last summer, Bull's Fame 2G chipset (anchored by the Bull Coherent Switch) and the related Mesca blade servers were designed to scale up to 16 sockets and offer up to eight DDR3 memory slots per socket.

The Mesca blade servers have four sockets and up to 256 GB per blade, and four of these are lashed together to make a 16-socket, 128-core, 1 TB fat node. InfiniBand switches could be used to link multiple nodes together if necessary, but it seems like the CX1000-S is aimed at providing a single fat node for local and departmental HPC work where having a big memory space to play in is more important than having lots of cores.
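
Taking those numbers at face value, the fat-node arithmetic works out as below; this is just an illustrative sketch built from the socket, core and memory counts quoted above:

# Fat-node maths for the presumed CX1000-S, per the figures quoted above
blades = 4
sockets_per_blade = 4
cores_per_socket = 8                              # eight-core Xeon 7500
memory_per_blade_gb = 256

sockets = blades * sockets_per_blade              # 16 sockets
cores = sockets * cores_per_socket                # 128 cores, one system image
memory_tb = blades * memory_per_blade_gb / 1024   # 1 TB of shared memory

print(sockets, cores, memory_tb)                  # 16 128 1.0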

Cray could easily make CX1000-C and CX1000-G equivalents using AMD's future six-core Opteron 4100 and imminent twelve-core Opteron 6100 processors (due on March 29). But making a fat node system is more problematic, since AMD's own chipsets for the Opteron 6100s top out at four sockets and 384 GB of main memory using 8 GB DIMMs. That is a reasonably fat node, to be sure. But it is not 1 TB.
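
For comparison, here is the same sketch for the four-socket Opteron 6100 box that AMD's own chipsets would allow, again using only the figures quoted above:

# Four-socket Opteron 6100 node, per the figures quoted above
sockets = 4
cores_per_socket = 12                     # twelve-core Opteron 6100
dimm_size_gb = 8
memory_gb = 384

cores = sockets * cores_per_socket        # 48 cores
dimms = memory_gb // dimm_size_gb         # 48 DIMMs, or 12 slots per socket

print(cores, dimms, memory_gb / 1024)     # 48 48 0.375 -- well short of 1 TB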

The Cray CX1000-C and CX1000-G machines are available now, with entry configurations costing under $100,000; the feeds and speeds of those entry configs were not available at press time. Cray has not said when it plans to put Nvidia's Fermi GPUs into the blades, and those are the ones customers really want, since they have more oomph and error correction as well. ®
