Jaguar to Titan? Not so bad…

Keptacular Metamorphosis

At SC11 I had the opportunity to talk to some of the people responsible for the biggest computer upgrade known to man. Oak Ridge National Labs is upgrading its current Cray XT5 ‘Jaguar’ system to a Cray XK6 system that will be known as ‘Titan’.

It’s quite a facelift. Today, Jaguar is a 1.75 PFlop supercomputer with more than 18,000 nodes containing 224,162 cores of AMD 6-core Istanbul processors. In 2009 it took over the number one slot on the Top500 list, sustaining more than 1.7 petaflops on Linpack.

Two years later it’s not exactly a performance dog, but it’s been knocked down to number three on the list, supplanted by the Fujitsu 10.5 PFlop K Computer and China’s NUDT 2.56 PFlop Tianhe-1A.

The transition from Jaguar to Titan will be profound, with a performance boost to somewhere around 20 PFlops – which should put it somewhere near the top, if not at the pinnacle, of the Top500. The biggest factor in the upgrade will be the move from a traditional CPU-based architecture to a hybrid CPU+GPU design.

In final form, which will be achieved next year, each of the 18,000+ Titan nodes will have one 16-core AMD Interlagos processor and an NVIDIA Kepler GPU accelerator. Titan will have many more CPU cores than Jaguar, plus the extra muscle provided by the 18,000+ Kepler GPUs in the mix. This will make Titan the largest hybrid supercomputer in the world – not just “GPU-riffic” but “Keptacular” as well. “Keptastic,” perhaps?

The biggest hurdle here isn’t the hardware; it’s the software, right? How the hell do you CUDA-ize the hundreds of applications and millions of lines of code that are running on Jaguar and will need to run on Titan? Not surprisingly, Cray and pals NVIDIA, PGI, and CAPS have been pondering this one. They’ve come up with OpenACC, and are presenting it as a parallel programming standard.

What OpenACC does is allow programmers to insert ‘directives’ into their code that alert the compiler to routines that should be parallelized – sent to multiple cores or to accelerators. The compiler does the work, and the programmer doesn’t have to change any of the underlying code (other than adding the directives, that is – and there are tools to help with that, too).
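
To make that concrete, here’s a minimal sketch of what a directive looks like in C. The SAXPY-style loop below is my own toy example, not one of Cray’s codes; the pragma is standard OpenACC syntax, and a compiler that doesn’t understand OpenACC will simply ignore it and run the loop serially:

    /* saxpy.c - toy OpenACC example: the #pragma line is the only
       OpenACC-specific change; the loop underneath it is ordinary C. */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static float x[N], y[N];

        for (int i = 0; i < N; i++) {
            x[i] = 1.0f;
            y[i] = 2.0f;
        }

        /* Directive: parallelize this loop on whatever target the
           compiler was given - GPU or multi-core CPU - copying x in
           and y both in and out of device memory. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);   /* expect 4.0 */
        return 0;
    }

Build it with an OpenACC-aware compiler and the loop gets offloaded; build it with a plain C compiler and the pragma is ignored, leaving you the original serial program – which is exactly the migration story Cray and friends are selling.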

I don’t pretend to know any of the ins and outs of writing parallel applications (well, I do pretend to know it if I’m certain that I’m talking to people who are dumber than I am), but a presentation from Cray’s John Levesque gave me some idea of how well OpenACC works.

One of his examples was the relative performance of the CAM-SE climate model when using different routes to CUDA-ization. On the current system, the CAM-SE REMAP function took 65.30 minutes to complete. Or was it seconds? (He went damned fast in the presentation, and I was in the back, but we’re talking relative performance.)

After a rewrite in anticipation of porting to an accelerator, they knocked it down to about 33.5. Hand-coding the resulting code for CUDA got them to 10.2 – a very significant speed-up. Taking the same rewritten code and running it through OpenACC gave them a 10.6 runtime – very close to hand-coded performance.
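
Taking those numbers at face value, the arithmetic works out to roughly a 6.4x speed-up (65.3/10.2) for hand-coded CUDA versus roughly 6.2x (65.3/10.6) for OpenACC – the directives land within about four per cent of the hand-tuned result.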

The cool thing about OpenACC is that it’s portable and chip agnostic. Using it will enable better parallelism on general purpose multi-core CPUs as well as GPU accelerators. Here’s the NVIDIA press release with some more details.
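
As a rough illustration of what that chip-agnosticism looks like in practice with the toy example above (these flags are from the PGI toolchain of the era and are illustrative only, so check your own compiler’s manual):

    # PGI: offload the annotated loop to an NVIDIA GPU and report
    # what the accelerator code generator did
    pgcc -acc -ta=nvidia -Minfo=accel saxpy.c -o saxpy_gpu

    # Any ordinary C compiler ignores the pragma and builds a plain
    # serial binary from the exact same source
    gcc saxpy.c -o saxpy_serial

Same code, different silicon – no fork of the source tree required.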
