US nuke lab goes back for BlueGene/Q seconds

A Vulcan rated at 5 petaflops

Lawrence Livermore National Laboratory, one of the big nuke labs funded by the US Department of Energy, does a lot of super-secret classified nuclear weapons design and management work, but it also lets the scientific community play with its biggest machines during their shakedown phases and keeps some iron around that researchers can use on a regular basis.

Having just fired up the "Sequoia" BlueGene/Q massively parallel supercomputer, which ranked number one on the Top 500 supercomputer rankings this month at 16.32 petaflops of sustained performance on the Linpack Fortran benchmark test, LLNL went back to Big Blue for a second helping. This summer, the nuke lab will be installing a second BlueGene/Q machine, nicknamed "Vulcan," which will comprise 24 racks and deliver about one-quarter of the number-crunching power of the Sequoia machine.
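A bit of back-of-the-envelope math squares that with the 5 petaflops figure, assuming Sequoia's 96 BlueGene/Q racks and roughly 20 petaflops of peak oomph: 24 racks is one-quarter of 96, and a quarter of 20 petaflops peak is about 5 petaflops – or roughly 4 petaflops sustained on Linpack, if Vulcan scales like its big brother.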

The 'Vulcan' BlueGene/Q super at LLNL

As Jim Sexton, director of the Computational Science Center at IBM Research, explained in an email exchange with El Reg, researchers who are getting to play with applications running on Sequoia now, during its shakedown, want a capacity-scale machine on which they can get some time once Sequoia goes behind the firewall and starts doing its real (and classified) work.

This is a much better way of doing things than with the "Roadrunner" hybrid Opteron-Cell machine at Los Alamos National Laboratory, the first petaflops-class box but essentially a one-off. If you developed code for that box, you were done once it went behind the firewall, as it did in October 2009.

In its six-month shakedown period, Los Alamos, also a DOE nuke lab, hosted the largest model of an expanding and accelerating universe ever loaded onto a cluster to look for dark matter and dark energy. Roadrunner was also used to map genetic sequences to create an HIV family tree, to simulate the interactions of lasers and plasmas as part of an effort to come up with controlled nuclear fusion, to simulate how single atoms moving around in nanowires can cause them to break or change their mechanical and electrical properties, and to run a program called Spasm, which simulated the interactions of multiple billions of atoms as shockwave stresses smash and deform materials.

As far as El Reg knows, they did not play Crysis on Roadrunner, and no one is going to be playing it on either Sequoia or Vulcan, either.

The Vulcan machine will be administered by LLNL's High Performance Computing Innovation Center (HPCIC), which was announced last year as an adjunct to the Sequoia contract with IBM. The idea is to take that six-month shakedown period and make it a permanent feature of the next-generation petaflopper installed at a DOE nuke lab, so academic and corporate researchers can develop codes on a big, bad box and come up with software innovations that not only solve real-world problems, but also help the nuke labs benefit from the work of other techies.

LLNL and IBM Research are also putting staff at the HPCIC to help get those who will be playing with Vulcan up and running, and they are soliciting collaborators in energy, materials science, manufacturing, and data management and informatics.

Sexton says that the codes developed by collaborators could be ported to smaller x86 clusters running Linux (which BlueGene/Q uses), but that the point is to get onto a big, multi-petaflops box and really do some big and meaningful simulations – and do them more quickly. ®
