Cray and SGI push upgrades to latest supers

Supercomputer makers Cray and Silicon Graphics have done years of engineering to get their respective XE6 and Altix UV 1000 massively parallel supercomputers to market. And now, despite research funding woes among governments, research institutions, and corporations, the two companies face the challenging task of convincing customers of their prior machines to upgrade to the new iron.

Both companies are actually getting a little traction. One prime example is Cray's deal with Sweden's Kungliga Tekniska Högskolan, or Royal Institute of Technology.

In June, Cray announced that KTH, which had five clusters rated at an aggregate of around 160 teraflops (using a mix of Xeon, Itanium, and Power processors), had become a Cray customer for the first time. KTH committed to buying an XT6m midrange machine that pairs Opteron 6100 blade servers with the old SeaStar2+ 2D torus interconnect to deliver 93 teraflops.

But now, KTH has rifled around in the couch cushions in the lounge and found some extra cash to go all the way and upgrade the new machine to a full-on XE6 system, complete with the new "Gemini" XE interconnect that debuted at the end of May.

The Gemini interconnect has around 100 times the message throughput of the SeaStar2+ interconnect. Both interconnects can be plugged into the Opteron 6100 blades made by Cray, but the new interconnect delivers about four times the peak theoretical scalability (around 3 million cores, using next year's "Interlagos" 16-core Opteron 6200s from AMD) of the SeaStar2+ interconnect. The SeaStar2+, used in the 1.76 petaflops "Jaguar" super at Oak Ridge National Laboratory, is panting heavily as it runs that machine's 224,162 cores.
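
As a rough sanity check on those scalability claims, here is a back-of-the-envelope sketch in Python. The 3-million-core figure and the 4x factor come straight from the numbers above; the implied SeaStar2+ ceiling is an inference from them, not a published Cray spec.

    # Back-of-the-envelope arithmetic on interconnect scalability.
    # The Gemini ceiling and the 4x factor are from the article; the
    # SeaStar2+ ceiling is inferred from them, not a Cray spec.
    gemini_max_cores = 3_000_000
    seastar_max_cores = gemini_max_cores / 4
    jaguar_cores = 224_162
    print(f"Implied SeaStar2+ ceiling: {seastar_max_cores:,.0f} cores")
    print(f"Jaguar is at {jaguar_cores / seastar_max_cores:.0%} of that ceiling")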

When KTH takes delivery of the XE6 upgrade to its XT6m machine later this year, the box will weigh in at 300 teraflops. While this is a far cry from the 20 petaflops or so of peak performance that the XE6 can hit using the current twelve-core Opteron 6100 processors, KTH's new XE6 system will double the performance available to Swedish researchers and will be one of the most powerful HPC systems in Europe.
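
For a sense of what 300 teraflops means in silicon, here is a rough estimate of the core count it implies. The 2.2 GHz clock and four double-precision flops per core per cycle are assumptions typical of the Opteron 6100 generation, not figures from KTH or Cray.

    # Rough core-count estimate behind the 300 teraflops figure.
    # Assumed (not from the article): 2.2 GHz clocks and 4 flops/cycle/core.
    peak_per_core = 2.2e9 * 4          # 8.8 gigaflops per core
    target = 300e12                    # 300 teraflops
    cores = target / peak_per_core
    sockets = cores / 12               # twelve-core Opteron 6100s
    print(f"~{cores:,.0f} cores across ~{sockets:,.0f} sockets")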

Over at SGI, the company said this week, while going over its fiscal 2010 results, that it has shipped Altix UV systems to fourteen customers since shipments began in late May. The "UltraViolet" Altix UV 1000 machines are made from Intel's Xeon 7500 processors - from 128 two-socket blade servers, to be precise. Rather than being a massively parallel cluster like the Cray XE6 machines, the Altix UV 1000 systems implement global shared memory over the NUMAlink 5 interconnect, so all 2,048 cores in the nodes can see all of the 16 TB of memory (max) at the same time.
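
Those numbers hang together. Assuming eight-core Xeon 7500 parts - which is what 2,048 cores across 128 two-socket blades implies, though the article doesn't name the SKU - the arithmetic is:

    # How the Altix UV 1000 core and memory counts add up.
    # Eight cores per socket is implied by the totals, not stated outright.
    blades = 128
    total_cores = blades * 2 * 8               # 2,048 cores
    gb_per_core = 16 * 1024 / total_cores      # 16 TB max memory
    print(f"{total_cores:,} cores, {gb_per_core:.0f} GB of shared memory per core")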

Technically speaking, the NUMAlink 5 interconnect router implements an 8x8, paired-node, 2D torus across those 128 blades.
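
To make that topology concrete, here is a minimal sketch of an 8x8 torus with a pair of blades at each grid point - an illustration of the wraparound layout, not SGI's actual routing code:

    # Illustrative 8x8 2D torus with paired nodes - not SGI routing logic.
    DIM = 8
    BLADES_PER_NODE = 2   # a pair of blades hangs off each torus position

    def neighbors(x, y):
        """One-hop neighbors of position (x, y), with torus wraparound."""
        return [((x - 1) % DIM, y), ((x + 1) % DIM, y),
                (x, (y - 1) % DIM), (x, (y + 1) % DIM)]

    print(DIM * DIM * BLADES_PER_NODE)   # 128 blades in total
    print(neighbors(0, 0))               # edges wrap around: (7, 0) and (0, 7)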

You can build petaflops-scale machines from Altix UV systems by lashing together 128 nodes using a fat tree configuration based on InfiniBand and then clustering 32 of these together using the NUMAlink 5 interconnect, for a total of 16,384 cores. This is not a shared memory system, obviously.

So far, no one has bought such a large Altix UV 1000 configuration, but the University of Minnesota - which is near Cray's stomping grounds and where IBM also has a whole lot of HPC and systems expertise - has tapped SGI for a 1,152-core Altix UV 1000 with 3.1 TB of shared memory. The UofM is paying for the new super thanks to a National Institutes of Health grant. Thanks, Uncle Sam.

The box, nicknamed "Koronis" after one of the 10,000 lakes in Minnesota, will be used for various life sciences work done by the department of chemistry, including multi-scale modeling, chemical dynamics, bioinformatics and computational biology, and biomedical imaging. The deal includes various visualization workstations and back-end servers from SGI as well as the Altix UV 1000 system.
