US funds exascale computing journey

While Torvalds and Patterson get hot

Thanks to $7.4m in government funding, a pair of national labs hopes to throw its big brains at the most pressing problems facing supercomputer designers.

Sandia and Oak Ridge national laboratories this week touted their new Institute for Advanced Architectures (IAA), which will explore what it takes to create "Exascale" machines. The researchers will tackle issues such as power, many-core processors, multi-threaded code and communications between the components in the largest of supercomputers. Breakthroughs in any or all of these areas should benefit the National Nuclear Security Administration and the Department of Energy’s Office of Science, which will support the work.

Just this week, the Texas Advanced Computing Center (TACC) did some ribbon cutting on a supercomputer said to mark a "new era for petascale science." That claim to fame comes from the "Ranger" machine's ability to hit 504 teraflops - or roughly half a petaflop - of peak performance. Exascale computers would take things to the next level, with an exaflop coming in 1,000 times faster than a petaflop. So, exaflop computers could crank through a million trillion calculations per second.
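The arithmetic behind those prefixes is a straight ladder of thousands; a quick sanity check (using the conventional powers-of-ten definitions of the FLOPS prefixes):

```python
# FLOPS prefixes scale by factors of 1,000 (powers of ten, per convention).
TERAFLOP = 10**12   # one trillion floating-point operations per second
PETAFLOP = 10**15   # 1,000 teraflops
EXAFLOP = 10**18    # 1,000 petaflops

# Ranger's 504 teraflops is roughly half a petaflop.
ranger_peak = 504 * TERAFLOP
print(ranger_peak / PETAFLOP)         # 0.504 petaflops
print(EXAFLOP / PETAFLOP)             # 1000.0 - an exaflop is 1,000 petaflops
print(EXAFLOP // (10**6 * 10**12))    # 1 - i.e. a million trillion ops/sec
```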

The IAA will concentrate on closing the gap between peak and sustained performance for exascale supercomputers. Part of that mission will revolve around making sure that all processors in a supercomputer stay active working on problems. And that's a particularly hairy issue when you consider that today's top supercomputers run on tens of thousands and even hundreds of thousands of cores - figures that will increase in coming years due to the rise of multi-core processors.
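One way to see why sustained performance lags peak at these core counts is Amdahl's law (a standard result, not something the IAA researchers cite here): even a tiny serial or idle fraction caps the speedup available from hundreds of thousands of cores.

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Maximum speedup on n_cores when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Even 0.1 per cent serial work limits 100,000 cores to ~990x speedup -
# roughly one per cent of the ideal 100,000x.
print(round(amdahl_speedup(0.001, 100_000)))  # 990
```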

"In an exascale computer, data might be tens of thousands of processors away from the processor that wants it," says Sandia computer architect Doug Doerfler. "But until that processor gets its data, it has nothing useful to do. One key to scalability is to make sure all processors have something to work on at all times."

Keeping processors busy will require novel parallel programming techniques along with improved internal communications systems.
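The basic latency-hiding idea behind those techniques can be sketched in miniature: start fetching the next piece of remote data while computing on the piece you already have, so the processor is never stalled waiting. This is a toy illustration using Python threads and an invented `fetch`/`compute` pair; real supercomputer codes get the same overlap from asynchronous, non-blocking communication rather than threads.

```python
import threading
import queue

def fetch(chunk_id):
    """Simulated remote fetch; stands in for a non-blocking receive."""
    return list(range(chunk_id * 4, chunk_id * 4 + 4))

def compute(chunk):
    """Stand-in for useful local work on a chunk of data."""
    return sum(x * x for x in chunk)

def run(num_chunks):
    results = []
    prefetched = queue.Queue(maxsize=1)

    def prefetcher():
        # Keeps one fetch in flight ahead of the consumer.
        for cid in range(num_chunks):
            prefetched.put(fetch(cid))

    t = threading.Thread(target=prefetcher)
    t.start()
    for _ in range(num_chunks):
        chunk = prefetched.get()  # next fetch overlaps with this compute
        results.append(compute(chunk))
    t.join()
    return results

print(run(3))  # [14, 126, 366]
```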

"In order to continue to make progress in running scientific applications at these [very large] scales," says Jeff Nichols, who heads the Oak Ridge branch of the institute, "we need to address our ability to maintain the balance between the hardware and the software. There are huge software and programming challenges, and our goal is to do the critical R&D to close some of the gaps."

The labs will also tackle the nagging issue of power consumption for large machines. Similar work is also taking place at IBM, Lawrence Berkeley National Lab and a variety of other research institutes.

Famed Berkeley researcher Dave Patterson - he of RISC and RAID fame - is also spearheading research into novel programming techniques that could benefit supercomputer-class machines as well as more standard boxes running on multi-core chips. Patterson's Parallel Computing Lab recently took in $10m from Microsoft and Intel.

Berkeley's win caught the eye of Linux kernel writer Linus Torvalds who started complaining about the parallel computing research on a message board. Patterson fought back, although mustering any rebuttal seemed a rather hopeless task since Torvalds failed to grasp the concepts of research and effort. If you want to hear more about Patterson's vision of the future, we have the show for you. ®
