Original URL: https://www.theregister.com/2010/09/01/cray_sgi_super_deals/

Cray and SGI push upgrades to latest supers

Tickle me, Elmo

By Timothy Prickett Morgan

Posted in HPC, 1st September 2010 18:55 GMT

Supercomputer makers Cray and Silicon Graphics have done years of engineering to get their respective XE6 and Altix UV 1000 massively parallel supercomputers to market. And now, despite research funding woes among governments, research institutions, and corporations, the two companies face the challenging task of convincing customers of their prior machines to upgrade to the new iron.

Both companies are actually getting a little traction. One prime example is Cray's deal with Sweden's Kungliga Tekniska Högskolan, or Royal Institute of Technology.

In June, Cray announced that KTH, which had five clusters rated at an aggregate of around 160 teraflops (using a mix of Xeon, Itanium, and Power processors), had become a Cray customer for the first time. It committed to buying an XT6m midrange super, using Opteron 6100-based blade servers and the old SeaStar2+ 2D torus interconnect, to build a 93 teraflops machine.

But now, KTH has rifled around in the couch cushions in the lounge and found some extra cash to go all the way and upgrade the new machine to a full-on XE6 system, complete with the new "Gemini" XE interconnect that debuted at the end of May.

The Gemini interconnect has around 100 times the message throughput of the SeaStar2+ interconnect. Both interconnects can be plugged into the Opteron 6100 blades made by Cray, but the new interconnect delivers about four times the peak theoretical scalability (around 3 million cores, using next year's "Interlagos" 16-core Opteron 6200s from AMD) of the SeaStar2+ interconnect. The SeaStar2+ interconnect used in the 1.76 petaflops "Jaguar" super at Oak Ridge National Laboratory is panting heavily as it runs that machine's 224,162 cores.
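
For the curious, here is what those ratios imply, sketched in a few lines of Python. The 3 million core ceiling and the four-fold factor are Cray's figures from above; the SeaStar2+ ceiling is merely what they imply, not a number Cray has published:

```python
# What Cray's scalability claims above imply for the two interconnects.

gemini_max_cores = 3_000_000                # peak, with 16-core "Interlagos" chips
seastar_max_cores = gemini_max_cores // 4   # implied SeaStar2+ ceiling

jaguar_cores = 224_162                      # Oak Ridge's Jaguar today

print(f"Implied SeaStar2+ ceiling: {seastar_max_cores:,} cores")
print(f"Jaguar is already at {jaguar_cores / seastar_max_cores:.0%} of it")
```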

When KTH takes delivery of its XE6 gear later this year to upgrade the XT6m machine, the resulting box will weigh in at 300 teraflops. While this is a far cry from the 20 petaflops or so of peak performance that the XE6 can hit using the current twelve-core Opteron 6100 processors, KTH's new XE6 system will double the performance available to Swedish researchers and will be one of the most powerful HPC systems in Europe.

Over at SGI, the company said this week, while going over its fiscal 2010 results, that it has shipped Altix UV systems to fourteen customers since shipments began in late May. The "UltraViolet" Altix UV 1000 machines are made from Intel's Xeon 7500 processors - from 128 two-socket blade servers, to be precise. Rather than being a massively parallel cluster like the Cray XE6 machines, the Altix UV 1000 systems implement global shared memory over the NUMAlink 5 interconnect, so all of the 2,048 cores in the nodes can see all of the 16 TB of memory (max) at the same time.

Technically speaking, NUMAlink 5 implements an 8x8, paired-node 2D torus across those 128 blades using the NUMAlink router.
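
Those core and memory counts are easy to sanity-check. Here is a quick Python sketch, assuming the top-bin eight-core flavor of the Xeon 7500 (the series also ships in four-core and six-core trims):

```python
# Sanity-checking the Altix UV 1000 numbers above.
# Assumes eight-core Xeon 7500s, the fattest parts in the lineup.

blades = 128
sockets_per_blade = 2
cores_per_socket = 8

cores = blades * sockets_per_blade * cores_per_socket
print(cores)                           # 2,048 cores, as advertised

# The 8x8 paired-node torus accounts for every blade:
print(8 * 8 * 2)                       # 64 torus positions x 2 blades = 128

max_memory_tb = 16
print(max_memory_tb * 1024 // cores)   # 8 GB of shared memory per core
```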

You can build petaflops-scale machines from Altix UV systems by lashing together 128 nodes in a fat tree configuration based on InfiniBand and then clustering 32 of these together using the NUMAlink 5 interconnect, for a total of 16,384 cores. This is not a shared memory system, obviously.

So far, no one has bought such a large Altix UV 1000 configuration, but the University of Minnesota - which is near Cray's stomping grounds and where IBM also has a whole lot of HPC and systems expertise - has tapped SGI for a 1,152-core Altix UV 1000 with 3.1 TB of shared memory. The UofM is paying for the new super thanks to a National Institutes of Health grant. Thanks, Uncle Sam.

The box, nicknamed "Koronis" after one of the 10,000 lakes in Minnesota, will be used for various life sciences work done by the department of chemistry, including multi-scale modeling, chemical dynamics, bioinformatics and computational biology, and biomedical imaging. The deal includes various virtualization workstations and back-end servers from SGI as well as the Altix UV 1000 system.

Calhoun, Itasca, Blade, and Elmo

The Minnesota Supercomputing Institute is located at the Minneapolis campus of the UofM, where four other clusters run. They call them Calhoun, Itasca, Blade, and Elmo. The university has been playing the field with server makers. Itasca comprises 1,091 ProLiant BL280c G6 blade servers from Hewlett-Packard, linked up with a 20 Gb/sec InfiniBand network.

Itasca was rated at 74.4 teraflops on the latest Top 500 rankings. Calhoun is a more modest Altix XE 1300 cluster with 256 nodes, also using InfiniBand as the backbone of the cluster, while Blade is made up of 307 Opteron-based BladeCenter LS21 servers using a much slower 10 Gb/sec InfiniBand interconnect. Elmo is a bit of a toy, comprising six Sun Fire X4600 fat nodes linked by a Gigabit Ethernet network.

The university has another campus in Rochester, where IBM's AS/400 labs are located and where Big Blue builds and tests its BlueGene massively parallel Linux supers. Rochester is also home to the Mayo Clinic, and the UofM campus there not surprisingly has very tight ties with IBM, which donated a BlueGene/P super to that facility. It is not clear how much oomph this machine has, but it is not enough to make the Top 500 rankings, which suggests it is not a big box.

For SGI, getting a big Altix UV 1000 win at the home of the Golden Gophers means making a deal in extremely hostile territory. (As was the case with HP getting the Itasca cluster deal.) It is a wonder, in fact, that the university is not in line to get a Power7 cluster along the lines of the petaflops-class "Blue Waters" machine that the University of Illinois - a rival in the Big Ten college football league - will be installing later this year. (You can read all about the guts of Blue Waters here.)

The real question about Cray and SGI is this: just how much upgrade money is out there for them to chase among their largest customers?

In the June 2010 Top 500 rankings, Cray had 21 systems with a total of 4.78 petaflops. Not that this is likely, but let's have some fun. Let's say all of these customers decide to upgrade their systems, doubling capacity and moving to XE6 systems. The numbers are a little tough to figure, but it looks like Cray is peddling a petaflops of oomph on a Baker/Gemini box for about $45m. So just selling a new box to every one of those Top 500 customers could net somewhere around $430m. There are lots of smaller XT6m and XE6m customers to peddle stuff to, for sure, but this might only be another couple of hundred million.
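
Here is the envelope we scribbled on, spelled out in Python. The $45m per petaflops figure is El Reg's estimate of Baker/Gemini pricing, not a number off any Cray price list:

```python
# Back-of-the-envelope Cray upgrade revenue, per the estimates above.

cray_top500_pflops = 4.78    # aggregate of Cray's 21 Top 500 systems
price_per_pflops_m = 45      # estimated $m per petaflops of XE6 iron

upgraded_pflops = cray_top500_pflops * 2                  # everyone doubles capacity
print(f"${upgraded_pflops * price_per_pflops_m:.0f}m")    # roughly $430m
```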

What about SGI? The company had only 17 machines on the current Top 500 list, for a total of 2.15 petaflops of oomph. However, only three of these boxes are based on NUMAlink interconnects, and these boxes have only 175.5 teraflops of oomph all told. Selling upgrades to these customers, while a good idea, is not going to generate much dough. But new customers sure can. The box going into the University of Minnesota is effectively a greenfield installation for a shared memory super, and is rated at around 42 teraflops by El Reg's estimation. SGI needs to make a lot more of these deals to make money. Fourteen down, many dozens to go. It wouldn't hurt SGI's numbers any if one of the big US nuke labs ponied up some cash to push the Altix UV to its limits for a cool $100m or so.
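
To put a number on how small that NUMAlink upgrade pool is next to SGI's overall Top 500 footprint, one last bit of Python; both inputs come straight from the Top 500 figures above:

```python
# How much of SGI's Top 500 footprint the NUMAlink upgrade pitch can reach.

sgi_top500_tflops = 2150.0   # 17 machines, 2.15 petaflops in aggregate
numalink_tflops = 175.5      # the three NUMAlink-based boxes

print(f"{numalink_tflops / sgi_top500_tflops:.0%}")   # about 8 per cent
```

Call it 8 per cent, which is why greenfield shared memory deals matter far more to SGI than upgrades do. ®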