US funds exascale computing journey

While Torvalds and Patterson get hot

Thanks to $7.4m in government funding, a pair of national labs hope to throw their big brains at the most pressing problems facing supercomputer designers.

Sandia and Oak Ridge national laboratories this week touted their new Institute for Advanced Architectures (IAA), which will explore what it takes to create "exascale" machines. The researchers will tackle issues such as power, many-core processors, multi-threaded code and communications between the components in the largest of supercomputers. Breakthroughs in any or all of these areas should benefit the National Nuclear Security Administration and the Department of Energy's Office of Science, which will support the work.

Just this week, the Texas Advanced Computing Center (TACC) did some ribbon cutting on a supercomputer said to mark a "new era for petascale science." That claim to fame comes from the "Ranger" machine's ability to hit 504 teraflops - or half a petaflop - of peak performance. Exascale computers would take things once again to the next level, with an exaflop coming in 1,000 times faster than a petaflop. So, exascale computers could crank through a million trillion calculations per second.
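
For a back-of-the-envelope sense of the scales involved - each prefix steps up by a factor of 1,000 - here's a quick sketch in Python (illustrative arithmetic only, not benchmark data):

    # Illustrative scale check: tera -> peta -> exa, each step a factor of 1,000.
    TERA, PETA, EXA = 1e12, 1e15, 1e18

    ranger_peak = 504 * TERA     # Ranger's peak: 504 teraflops
    print(ranger_peak / PETA)    # 0.504 -- "half a petaflop"
    print(EXA / PETA)            # 1000.0 -- an exaflop is 1,000 petaflops
    print(f"{EXA:.0e}")          # 1e+18 -- a million trillion calculations per second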

The IAA will concentrate on closing the gap between peak and sustained performance for exascale supercomputers. Part of that mission will revolve around making sure that all processors in a supercomputer stay active working on problems. And that's a particularly hairy issue when you consider that today's top supercomputers run on tens of thousands and even hundreds of thousands of cores - figures that will increase in coming years due to the rise of multi-core processors.
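
That peak-versus-sustained gap is easy to state as a ratio, even if closing it is the hard part. A minimal sketch, with made-up numbers rather than anything measured by the IAA:

    # Peak vs. sustained performance (numbers are hypothetical, for illustration).
    def efficiency(sustained_flops, peak_flops):
        """Fraction of theoretical peak an application actually sustains."""
        return sustained_flops / peak_flops

    peak = 504e12        # a Ranger-class peak, in FLOPS
    sustained = 60e12    # a made-up application's sustained rate
    print(f"{efficiency(sustained, peak):.1%} of peak")   # ~11.9% of peak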

"In an exascale computer, data might be tens of thousands of processors away from the processor that wants it,” says Sandia computer architect Doug Doerfler. "But until that processor gets its data, it has nothing useful to do. One key to scalability is to make sure all processors have something to work on at all times."

Keeping processors busy will require novel parallel programming techniques along with improved internal communications systems.
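
One established technique here is overlapping communication with computation: start a non-blocking transfer, do useful work while the data is in flight, and only then wait on the result. Below is a minimal sketch using mpi4py's non-blocking send and receive (it assumes a working MPI install, and the filename overlap.py is purely for the example; run with something like mpiexec -n 2 python overlap.py):

    # Latency hiding: kick off a non-blocking transfer, compute while it's in flight.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        req = comm.isend(list(range(1000)), dest=1, tag=7)  # start the send
        local = sum(i * i for i in range(10**6))            # useful work meanwhile
        req.wait()                                          # confirm the send finished
    elif rank == 1:
        req = comm.irecv(source=0, tag=7)                   # post the receive early
        local = sum(i * i for i in range(10**6))            # stay busy while waiting
        data = req.wait()                                   # remote data lands here
        print(f"rank 1: received {len(data)} items, local result {local}")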

"In order to continue to make progress in running scientific applications at these [very large] scales,” says Jeff Nichols, who heads the Oak Ridge branch of the institute, “we need to address our ability to maintain the balance between the hardware and the software. There are huge software and programming challenges and our goal is to do the critical R&D to close some of the gaps.”

The labs will also tackle the nagging issue of power consumption for large machines. Similar work is also taking place at IBM, Lawrence Berkeley National Lab and a variety of other research institutes.

Berkeley researcher Dave Patterson - he of RISC and RAID fame - is also spearheading research into novel programming techniques that could benefit supercomputer-class machines as well as more standard boxes running on multi-core chips. Patterson's Parallel Computing Lab recently took in $10m from Microsoft and Intel.

Berkeley's win caught the eye of Linux kernel writer Linus Torvalds, who started complaining about the parallel computing research on a message board. Patterson fought back, although mustering any rebuttal seemed a rather hopeless task since Torvalds failed to grasp the concepts of research and effort. If you want to hear more about Patterson's vision of the future, we have the show for you. ®
