Rutherford Appleton Lab fires up ceepie-geepie hybrid

Southampton gets Iridis 3 cluster upgrade

A consortium of universities and the Rutherford Appleton Laboratory in Oxfordshire has finished building two new supercomputer clusters for academic and corporate researchers to let their codes loose upon.

Perhaps the most interesting of the new machines is the one called "Emerald", a nod to Nvidia and its green theme, and, as it turns out, the most powerful hybrid CPU-GPU supercomputer in the United Kingdom at the moment. (Well, of the publicly known ones, anyway.)

Emerald is based on HP ProLiant servers, and Nvidia says it has 84 server nodes that are equipped with 372 Tesla M2090 GPU coprocessors. That ratio suggests that RAL has opted for HP's SL6500 scalable chassis, and indeed, if you look at the Top 500 entry for the Emerald system, you'll see that it is based on the SL390s G7 server nodes that came out in April 2011. The 2U version of the SL390s G7 server trays can hold three GPU coprocessors (there is a 4U version that can hold a stunning eight coprocessors) for each two-socket Xeon 5600 node.

Rutherford Appleton Lab's "Emerald" super

In this case, RAL has opted for six-core Xeon 5649 processors spinning at 2.53GHz, and is using QDR InfiniBand to link the nodes. The Tesla M2090s link to the server nodes by PCI-Express 2.0 bus links.

This machine has a total of 6,960 cores (1,008 of them are x86 cores, the rest are on the Teslas) and has a peak theoretical 257.6 teraflops of number-crunching power. On the Linpack test, the Emerald machine was rated at 114.4 teraflops and ranked number 159 on the list.
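Those figures hang together if you do the arithmetic. A quick sanity check, assuming the Tesla M2090's 665 gigaflops double-precision peak and four DP flops per clock per core for the Westmere-EP Xeons (neither figure is in the article itself):

```python
# Sanity-check Emerald's published numbers. Assumptions not stated in
# the article: 665 GFLOPS DP peak per Tesla M2090, and 4 DP flops per
# cycle per core for the Xeon 5649 chips.
NODES, SOCKETS, CORES_PER_CHIP = 84, 2, 6
GPUS = 372

cpu_cores = NODES * SOCKETS * CORES_PER_CHIP     # 1,008 x86 cores
cpu_peak = cpu_cores * 2.53e9 * 4                # ~10.2 TFLOPS
gpu_peak = GPUS * 665e9                          # ~247.4 TFLOPS
total_tf = (cpu_peak + gpu_peak) / 1e12

print(f"x86 cores: {cpu_cores}")                 # 1008
print(f"peak: {total_tf:.1f} TFLOPS")            # 257.6
# Top 500 counts (6,960 - 1,008) / 372 = 16 "cores" per GPU
print(f"'cores' per GPU: {(6960 - cpu_cores) // GPUS}")
```

Note that the 114.4 teraflops Linpack result is only 44 per cent of peak, a typically low efficiency for hybrid machines of that era, where the PCI-Express 2.0 hop to the GPUs eats into sustained performance.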

RAL will host the Emerald machine and has been designated a CUDA Center of Excellence by Nvidia as well – which means it gets support translating codes to run on GPU coprocessors and can then use that expertise to help others make the jump to GPUs.

According to a statement from the Engineering and Physical Sciences Research Council (EPSRC), the launch of the Emerald machine is the official debut of the e-Infrastructure South Consortium, which brings together four UK universities – Bristol, Oxford, University College London, and Southampton – and the Department of Scientific Computing at RAL. Together they have formed the e-Infrastructure South Centre for Innovation, which will own and operate Emerald and a second machine – actually an upgrade to an existing one – called Iridis 3.

Built by IBM from its iDataPlex rackish-bladish hybrids, Iridis 3 was first installed at Southampton back in November 2009 with 8,000 cores spread across 1,000 nodes. This box used the "Nehalem-EP" Xeon 5500 processors, and could deliver 72.3 teraflops peak and 66.7 teraflops sustained on the Linpack Fortran benchmark test.
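The original configuration checks out too, assuming two quad-core chips per node and a 2.26GHz clock (the clock speed and exact Xeon 5500 model are inferences, not stated in the article):

```python
# Back-of-envelope check on the original 2009 Iridis 3 figures.
# Assumptions not in the article: quad-core 2.26GHz Nehalem-EP chips
# (e.g. a Xeon E5520) doing 4 DP flops per cycle per core.
nodes, sockets, cores_per_chip = 1000, 2, 4
total_cores = nodes * sockets * cores_per_chip   # 8,000 cores
peak_tf = total_cores * 2.26e9 * 4 / 1e12        # ~72.3 TFLOPS
efficiency = 66.7 / peak_tf                      # Linpack vs peak
print(total_cores, f"{peak_tf:.1f} TFLOPS", f"{efficiency:.0%}")
```

A 92 per cent Linpack efficiency is what you would expect from a plain CPU cluster on a fat InfiniBand network – no GPU offload overhead to pay for.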

Interestingly, Iridis was one of the largest Windows HPC Server 2008 clusters in the world when it was built, using an InfiniBand network to lash the nodes together.

The University of Southampton's Iridis 3 supercomputer

IBM has just upgraded this Iridis 3 box with six-core "Westmere-EP" processors, cutting the number of nodes back to 924 and boosting the core count to 11,088. And now the machine – surely it should be called Iridis 3.5 – is running Linux, and has 106.4 teraflops peak and 94.7 teraflops sustained performance on Linpack. That's a tidy 42 per cent performance boost through a CPU upgrade.
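The upgrade arithmetic is easy enough to verify from the figures above:

```python
# The Iridis 3 upgrade maths: six-core Westmere-EP chips across 924
# two-socket nodes, and the claimed 42 per cent Linpack boost over
# the old sustained figure of 66.7 teraflops.
nodes, sockets, cores_per_chip = 924, 2, 6
new_cores = nodes * sockets * cores_per_chip     # 11,088 cores
boost = 94.7 / 66.7 - 1                          # new vs old Linpack
efficiency = 94.7 / 106.4                        # Linpack vs peak
print(new_cores, f"{boost:.0%}", f"{efficiency:.0%}")
```

An 89 per cent Linpack efficiency also shows the upgraded machine is still a well-balanced CPU-only cluster, unlike its hybrid stablemate down the road at RAL.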

Southampton didn't say what it did with the other 76 nodes, but that's the IT department for you. Since they were running Windows, they are probably playing Crysis.

The two supercomputers were paid for through a £3.7m grant from the UK Engineering and Physical Sciences Research Council, which has £145m from the guv'ment to upgrade various strategic IT infrastructure around the country.

In addition to the four universities and the RAL, a number of commercial entities are already signed up to make use of the supers, including the Numerical Algorithms Group, Schlumberger Abingdon, and InhibOx. The machines will be used in a number of research areas, including astrophysics, bioinformatics, climate-change modeling, simulating 3G and 4G networks, medical imaging, and flu tracking. ®
