Cray notches another XE6-Cascades super deal

Dancing the Japanese two-step

Cray's future "Cascade" family of supercomputers, which sports a new interconnect and a new means of linking into processors and coprocessors, is not even fully developed yet, and already the company has inked another deal for one of the boxes.

After Wall Street closed for business on Tuesday, Cray said that Kyoto University will install an Opteron-based XE6 parallel supercomputer rated at 300 teraflops, to be fired up in 2012, and will then add a 400 teraflops Cascade system in 2014.

The XE6 supers, which shipped with AMD's 12-core Opteron 6100s in early 2010, are now shipping in limited quantities with the 16-core Opteron 6200 processors, announced two weeks ago at the SC11 supercomputer conference.

The XE6 supers employ the current "Gemini" XE interconnect to lash together Opteron processors in a 3D torus configuration. That setup scales to multiple petaflops using Opteron processors alone, and to tens of petaflops using hybrid CPU-GPU setups such as the 20-petaflops "Titan" XE6-XK6 machine that Cray will build for Oak Ridge National Laboratory next year, or the 10-petaflops "Blue Waters" machine covered by the $188m contract that Cray just landed at the National Center for Supercomputing Applications at the University of Illinois.

Cray has not said much about the Cascade interconnect, code-named "Aries", but what we do know is that it will have a much beefier version of the high-radix router at the heart of the Gemini interconnect, and that it will plug into on-chip PCIe ports to link into processors, instead of using HyperTransport ports on Opterons or QuickPath Interconnect ports on Xeons.

The Opteron processors do not yet have on-chip PCIe 3.0 controllers, so thus far the Cascade machines are being designed for use with a future generation of Intel Xeon processors – probably an "Ivy Bridge" generation Xeon chip, but Cray is not saying. The Cascade design means that Cray can use either Opteron or Xeon processors, and is not put in the position of supporting both HyperTransport and QuickPath interconnects.

The Cascade concept was funded by the Defense Advanced Research Projects Agency with an initial $43.1m grant in 2003, and was followed up with a $250m development contract, which DARPA has whittled back to $180m in recent years as its needs have changed. Cray doesn't book the Cascade funds as revenue, but rather as an offset against R&D for a future and as-yet-unannounced product. DARPA will get a Cascade machine as part of the deal, presumably.

The University of Stuttgart inked a two-step XE6-Cascade deal, the very first one, back in late October. The university's High Performance Computing Center Stuttgart (HLRS) paid more than $60m to get an XE6 super called "Hermit" that weighed in at 1 petaflops of peak number-crunching performance. The center will eventually upgrade to a Cascade system with somewhere between 4 and 5 petaflops of performance in the second half of 2013. HLRS was a big user of x64 clusters and NEC SX-6 vector machines.

That brings us to Kyoto University.

The Academic Center for Computing and Media Studies (ACCMS) at Kyoto U will be the first organization in Asia to get a Cascade machine, and it is also the first of the seven major technical universities in Japan to bring in Cray as a prime contractor for a massive supercomputer. (In September, Tsukuba University bought an 800 teraflops Xtreme-X super from Appro International based on Intel's impending "Sandy Bridge" Xeon E5 processors and Nvidia Tesla GPUs.)

The Cray win is a big deal, considering the politics of supercomputing and that Kyoto University currently has Fujitsu iron. This includes a fat-node cluster of Sparc Enterprise M9000 systems and an x86 skinny-node cluster of Opteron-based HX6000 servers.

The Sparc cluster is made up of seven 128-core M9000 servers, each with 1TB of memory, and weighs in at only 9 teraflops of peak performance; the Opteron cluster has 416 server nodes, each with four four-core Opteron 8350 processors, and the whole shebang is rated at 61.2 teraflops.

The Cray XE6 system will roughly quintuple the university's processing capacity, and the move to the Cascade box will more than double it again, leaving 700 teraflops of aggregate oomph on the floor. (Kyoto is not upgrading the XE6 to the Cascade, but keeping both machines side by side.)
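For the spreadsheet-inclined, the capacity claims above check out against the article's own numbers. A quick back-of-the-envelope sketch (all figures in teraflops, taken straight from the story; the "quintuple" and "more than double" characterizations are loose, as verified here):

```python
# Peak-performance figures cited in the article, in teraflops.
existing_sparc = 9.0      # seven 128-core Sparc Enterprise M9000 servers
existing_opteron = 61.2   # 416-node Opteron HX6000 cluster
existing_total = existing_sparc + existing_opteron  # ~70.2 TF today

xe6 = 300.0       # Cray XE6, due in 2012
cascade = 400.0   # Cascade system, due in 2014

# XE6 vs current Fujitsu iron: ~4.3x, which the article rounds to "quintuple".
print(f"XE6 vs existing: {xe6 / existing_total:.1f}x")

# Both Cray machines run side by side, so floor capacity is the sum: 700 TF,
# or ~2.3x the XE6 alone -- the "more than double it again" claim.
combined = xe6 + cascade
print(f"Combined: {combined:.0f} TF ({combined / xe6:.2f}x the XE6 alone)")
```
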

Pricing for the XE6 and Cascade machines acquired by Kyoto was not divulged – in fact, Cray said that the contracts for the deals have not yet been finalized.

Which makes you wonder why Cray brought it up at all. ®
