US funds exascale computing journey

While Torvalds and Patterson get hot

Thanks to $7.4m in government funding, a pair of national labs hope to throw their big brains at the most pressing problems facing supercomputer designers.

Sandia and Oak Ridge national laboratories this week touted their new Institute for Advanced Architectures (IAA), which will explore what it takes to create "Exascale" machines. The researchers will tackle issues such as power, many-core processors, multi-threaded code and communications between the components in the largest of supercomputers. Breakthroughs in any or all of these areas should benefit the National Nuclear Security Administration and the Department of Energy’s Office of Science, which will support the work.

Just this week, the Texas Advanced Computing Center (TACC) did some ribbon cutting on a supercomputer said to mark a "new era for petascale science." That claim to fame comes from the "Ranger" machine's ability to hit 504 teraflops - or roughly half a petaflop - of peak performance. Exascale computers would take things once again to the next level, with an exaflop coming in 1,000 times faster than a petaflop. So, exaflop computers could crank through a million trillion calculations per second.
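If the prefixes start to blur, a quick back-of-the-envelope sketch in Python shows how those figures line up. It is purely illustrative; the only number taken from the story is Ranger's 504-teraflop peak.

```python
# Rough flops arithmetic: how peak ratings compare across scales.
TERA = 10**12   # 1 teraflop = a trillion calculations per second
PETA = 10**15   # 1 petaflop = 1,000 teraflops
EXA = 10**18    # 1 exaflop  = 1,000 petaflops = a million trillion calc/s

ranger_peak = 504 * TERA            # TACC's Ranger: 504 teraflops peak

print(ranger_peak / PETA)           # ~0.504 -> "roughly half a petaflop"
print(EXA / PETA)                   # 1000.0 -> an exaflop is 1,000x a petaflop
print(EXA / ranger_peak)            # ~1984  -> call it 2,000 Rangers per exaflop
```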

The IAA will concentrate on closing the gap between peak and sustained performance for exascale supercomputers. Part of that mission will revolve around making sure that all processors in a supercomputer stay active working on problems. And that's a particularly hairy issue when you consider that today's top supercomputers run on tens of thousands and even hundreds of thousands of cores - figures that will increase in coming years due to the rise of multi-core processors.
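To see why idle processors matter, here is a deliberately crude model of how sustained performance falls away from peak when cores sit around waiting for work. The utilisation figures are made-up assumptions for illustration, not measurements from any real machine.

```python
# Toy model: sustained performance as peak scaled by average core utilisation.
# The utilisation values below are illustrative assumptions, not measurements.

def sustained_flops(peak_flops: float, utilisation: float) -> float:
    """Sustained throughput if cores do useful work `utilisation` of the time."""
    return peak_flops * utilisation

peak = 1e18  # a notional exaflop machine

for util in (0.9, 0.5, 0.1):
    petaflops = sustained_flops(peak, util) / 1e15
    print(f"utilisation {util:.0%}: sustained ~{petaflops:.0f} petaflops")
```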

"In an exascale computer, data might be tens of thousands of processors away from the processor that wants it,” says Sandia computer architect Doug Doerfler. "But until that processor gets its data, it has nothing useful to do. One key to scalability is to make sure all processors have something to work on at all times."

Keeping processors busy will require novel parallel programming techniques along with improved internal communications systems.
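One established trick along those lines is to overlap communication with computation, so a processor keeps crunching while its data is still in flight. The sketch below is a minimal, hypothetical illustration using mpi4py's non-blocking sends and receives in a toy ring exchange; it assumes mpi4py and NumPy are installed and has nothing to do with the IAA's actual code.

```python
# Minimal overlap of communication and computation with non-blocking MPI.
# Requires mpi4py and NumPy; run with e.g. `mpirun -n 4 python overlap.py`.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

left = (rank - 1) % size          # neighbours in a simple ring
right = (rank + 1) % size

outgoing = np.full(1_000, rank, dtype=np.float64)
incoming = np.empty(1_000, dtype=np.float64)

# Start the exchange, but don't wait for it yet.
requests = [comm.Isend(outgoing, dest=right, tag=0),
            comm.Irecv(incoming, source=left, tag=0)]

# Useful local work proceeds while the data is still on the wire.
local_result = np.sum(outgoing ** 2)

# Only block once the remote data is actually needed.
MPI.Request.Waitall(requests)
combined = local_result + np.sum(incoming)

print(f"rank {rank}: combined = {combined}")
```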

"In order to continue to make progress in running scientific applications at these [very large] scales,” says Jeff Nichols, who heads the Oak Ridge branch of the institute, “we need to address our ability to maintain the balance between the hardware and the software. There are huge software and programming challenges and our goal is to do the critical R&D to close some of the gaps.”

The labs will also tackle the nagging issue of power consumption for large machines. Similar work is also taking place at IBM, Lawrence Berkeley National Lab and a variety of other research institutes.

Famed Berkeley researcher Dave Patterson - he of RISC and RAID fame - is also spearheading research into novel programming techniques that could benefit supercomputer class machines as well as more standard boxes running on multi-core chips. Patterson's Parallel Computing Lab recently took in $10m from Microsoft and Intel.

Berkeley's win caught the eye of Linux kernel writer Linus Torvalds, who started complaining about the parallel computing research on a message board. Patterson fought back, although mustering any rebuttal seemed a rather hopeless task since Torvalds failed to grasp the concepts of research and effort. If you want to hear more about Patterson's vision of the future, we have the show for you. ®
