Cray shrinks XE6m supers down to a rack
So long CX1000 Xeon-InfiniBand baby clusters
Cray tried to sell Fords and Chevies when it launched the CX1000 entry supercomputer clusters back in March 2010. But to make its life easier, and to help bolster sales of its XE6 and XE6m supers that are based on Cray's own high-speed interconnect and software stack, the company has figured out how to shrink a Lexus down so it fits into the Ford and Chevy budget.
The CX1000 mixed Xeon-based blade servers and InfiniBand networking get a lower entry point than Cray could manage with its homegrown supers, which are based on Opteron processors from AMD and Cray's "Gemini" interconnect. The idea was for Cray to expand its revenues by slapping the Cray brand and technical backing behind InfiniBand clusters.
Cray's single-rack XE6m mini supercomputer
It is debatable how well or poorly this had worked, but it probably doesn't matter much now that Cray has landed two of the largest supercomputer contracts in the world – the 10-petaflops "Blue Waters" super at the University of Illinois and the 20-petaflops "Titan" super at Oak Ridge National Laboratory.
Now there are going to be scads of researchers who will want to deploy applications on these machines, and they would rather not shell out money for an InfiniBand cluster from Cray to design and tune their code. The advent of the mini XE6m announced today by Cray means that researchers can afford to buy a slice of Titan or Blue Waters and work in their offices on the same technology that they will deploy on a much grander scale later.
Barry Bolding, vice president of marketing at Cray, says that while the entry price for a configured half-rack of the CX1000 blade servers with quad data rate (40Gb/sec) InfiniBand links was around $100,000, the sweet spot for sales for these machines was somewhere on the order of $200,000 to $400,000 (in other words, from one to two racks). Each 7U chassis of CX1000 machines had 36 sockets across 18 blade servers and a 36-port InfiniBand switch, so you are talking about putting about 864 Xeon 5600 cores into four of these units inside of a single rack for around $200,000.
By contrast, last year a single rack of the XE6m machines could be loaded up with 24 eight-socket blades, and using the twelve-core Opteron 6100 processors, you could get around 10 teraflops of oomph across 2,304 cores for around $500,000, or $50,000 per teraflops. This was good if you happened to have $500,000 lying around. Plenty of scientists do not, but they still want to develop code for current and future Cray machines like Blue Waters and Titan, and they don't want to use a machine like the CX1000 – or to build their own for less money.
So Cray has gone back to the drawing board and come up with a baby XE6m configuration with six blades and 48 sockets using the new Opteron 6200s that can deliver about 6.5 teraflops across the "Gemini" XE interconnect at a cost of only $200,000, or about $30,769 per teraflops. Some of that price difference per teraflops comes from the jump from the Opteron 6100s to the 6200s, but some of it is Cray figuring out how to scale down its XE6m systems.
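For those keeping score at home, here is a quick back-of-the-envelope check of those price/performance figures, using only the rack prices and teraflops numbers cited above:

```python
# Cost per teraflops for the two XE6m configurations mentioned in the story
old_rack = 500_000 / 10    # full Opteron 6100 rack: $50,000 per teraflops
baby_box = 200_000 / 6.5   # six-blade Opteron 6200 config: ~$30,769 per teraflops

print(f"${old_rack:,.0f} vs ${baby_box:,.0f} per teraflops")
# The baby config cuts cost per teraflops by roughly 38 per cent
```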
Cray also feels confident that it can go after the entry cluster market with its own Gemini interconnect instead of InfiniBand, thanks to the Cluster Compatibility Mode (CCM) in Cray's homegrown Linux Environment 3.0 software, which emulates Ethernet on top of Gemini. This allows binaries compiled for Ethernet-based x86 clusters to run unchanged on XE6m and XE6 machines.
The CCM Ethernet emulation launched in April 2010 and has been tweaked and tuned since. You still need to compile the apps to run on a big box like Blue Waters or Titan, of course, but you can do app development in Ethernet compatibility mode. And Bolding says that Cray expects many researchers to do just that.
Cray has stopped selling the CX1000s, by the way. ®