Jaguar to Titan? Not so bad…

Keptacular Metamorphosis

At SC11 I had the opportunity to talk to some of the people responsible for the biggest computer upgrade known to man. Oak Ridge National Laboratory is upgrading its current Cray XT5 ‘Jaguar’ system to a Cray XK6 system that will be known as ‘Titan’.

It’s quite a facelift. Today, Jaguar is a 1.75 PFlop supercomputer with more than 18,000 nodes and 224,162 cores of six-core AMD Istanbul processors. In 2009 it delivered better than a petaflop of sustained performance, taking the number one slot on the Top500 list.

Two years later it’s not exactly a performance dog, but it’s been knocked down to number three on the list, supplanted by the Fujitsu 10.5 PFlop K Computer and China’s NUDT 2.56 PFlop Tianhe-1A.

The transition from Jaguar to Titan will be profound, with a performance boost to somewhere around 20 PFlops – which should put it somewhere near the top, if not at the pinnacle, of the Top500. The biggest factor in the upgrade will be the move from a traditional CPU-based architecture to a hybrid CPU+GPU design.

In final form, which will be achieved next year, each of the 18,000+ Titan nodes will have one 16-core AMD Interlagos processor and an NVIDIA Kepler GPU accelerator. Titan will have many more CPU cores than Jaguar, plus the additional horsepower of more than 18,000 Kepler GPUs in the mix. This will make Titan the largest hybrid supercomputer in the world – not just “GPU-riffic” but “Keptacular” as well. “Keptastic,” perhaps?

The biggest hurdle here isn’t the hardware; it’s the software, right? How the hell do you CUDA-ize the hundreds of applications and millions of lines of code that are running on Jaguar and will need to run on Titan? Not surprisingly, Cray and pals NVIDIA, PGI, and CAPS have been pondering this one. They’ve come up with OpenACC, and are presenting it as a parallel programming standard.

What OpenACC does is allow programmers to insert ‘directives’ into their code that alert the compiler to routines that should be parallelized – sent to multiple cores or to accelerators. The compiler does the work, and the programmer doesn’t have to change any of the underlying code (other than adding the directives, that is – and there are tools to help with that, too).
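To give a rough idea of what that looks like in practice, here’s a minimal sketch of an OpenACC-annotated loop in C – my own toy example, not anything from the Cray or CAM-SE codebases – assuming an OpenACC-capable compiler such as Cray’s or PGI’s:

/* Toy example: a saxpy loop annotated with an OpenACC directive.
 * The routine, data sizes, and data clauses are illustrative
 * assumptions, not taken from any Jaguar/Titan application. */
void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    /* The pragma asks the compiler to parallelize the loop and, if an
     * accelerator is present, offload it, copying x in and y in/out.
     * The loop body itself is untouched. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

Because the annotation is just a pragma, a compiler that doesn’t understand OpenACC simply ignores it and the loop runs serially – which is a big part of the portability argument below.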

I don’t pretend to know any of the ins and outs of writing parallel applications (well, I do pretend to know it if I’m certain that I’m talking to people who are dumber than I am), but a presentation from Cray’s John Levesque gave me some idea of how well OpenACC works.

One of his examples was the relative performance of the CAM-SE climate code when using different routes to CUDA-ization. On the current system, the CAM-SE REMAP function took 65.30 minutes to complete. Or was it seconds? (He went damned fast in the presentation, and I was in the back, but we’re talking relative performance.)

After a rewrite in anticipation of porting to an accelerator, they knocked it down to about 33.5. Hand-coding the result for CUDA got them to 10.2 – a very significant speed-up. Taking the same rewritten code and running it through OpenACC gave them a 10.6 runtime – very close to hand-coded performance.
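Assuming those figures are all in the same units, that works out to roughly a 6.4x speed-up for the hand-coded CUDA version (65.3 / 10.2) and about 6.2x for OpenACC (65.3 / 10.6) – the directives giving up only a few per cent to hand tuning.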

The cool thing about OpenACC is that it’s portable and chip-agnostic. Using it will enable better parallelism on general-purpose multi-core CPUs as well as GPU accelerators. Here’s the NVIDIA press release with some more details.
