Stanford super runs million-core calculation

'Sequoia' focuses its attention on fluid dynamics problem

Stanford University engineers are claiming a record for the year-old Sequoia supercomputer, after running up a calculation that used more than a million of the machine’s cores at once.

The work was conducted by the university’s Centre for Turbulence Research, seeking to get a model for supersonic jet noise that’s more sophisticated than “wow, that’s loud”. The predictive simulations the centre conducts contribute to designing quieter engines by providing input into components such as nozzle shape.

The trick is getting the models to run quickly enough, and it was this search for speed that drove the researchers to get their code running across so many cores in parallel.

Sequoia has more than 1.5 million cores and 1.6 petabytes of storage, which when it was first installed made it the world's most powerful supercomputer. In normal use, however, that power is spread across a number of different workloads.

As Stanford’s announcement notes: “CFD [computational fluid dynamics] simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be.”
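The "divvying up" Stanford describes is domain decomposition: the simulation grid is split into subdomains, one per core, and neighbouring subdomains swap boundary ("ghost") cells each timestep so the waves can propagate across the splits. The following toy sketch, which is illustrative only and in no way Stanford's actual CFD code, shows the idea on a one-dimensional diffusion stencil, and confirms that the decomposed run matches a single-domain run:

```python
# Toy sketch of domain decomposition with ghost-cell exchange.
# Each "core" owns one slice of the grid; before every step it
# borrows a single edge value from each neighbour, so the stencil
# sees the same data it would on one undivided domain.
import numpy as np

def step(u, alpha=0.1):
    """One explicit diffusion step on the interior of u (one ghost cell at each end)."""
    return u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])

def run_serial(u0, n_steps):
    """Reference run on one undivided domain (edge values simply copied)."""
    u = u0.copy()
    for _ in range(n_steps):
        padded = np.concatenate(([u[0]], u, [u[-1]]))
        u = step(padded)
    return u

def run_decomposed(u0, n_parts, n_steps):
    """Same computation split across n_parts 'cores' with ghost-cell exchange."""
    parts = np.array_split(u0, n_parts)
    for _ in range(n_steps):
        padded = []
        for i, p in enumerate(parts):
            # Ghost-cell exchange: stand-in for the MPI halo swap a real code would do.
            left = parts[i - 1][-1] if i > 0 else p[0]
            right = parts[i + 1][0] if i < n_parts - 1 else p[-1]
            padded.append(np.concatenate(([left], p, [right])))
        parts = [step(pp) for pp in padded]
    return np.concatenate(parts)
```

On a real machine each slice lives on a different node and the exchange is a message over the interconnect, which is why the announcement stresses the balance between computation, memory and communication: the more cores, the smaller each slice and the larger the share of time spent swapping edges.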

If anything’s wrong with any of the million bits of code running at once, the simulation is at best slowed down. The researchers first spent weeks “ironing out the wrinkles”, the university said, before kicking off the final simulation, starting with “full-system scaling” before finally watching the CFD simulation reach the million-core target.

The work, led by research associate Joseph Nichols, was described by the centre’s director Parviz Moin like this:

“Computational fluid dynamics (CFD) simulations, like the one Nichols solved, are incredibly complex. Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed.”

One of its outputs is shown below. Exhaust temperatures are shown in red, noise in blue, and the grey object at the left is a new nozzle design, with chevrons in the nozzle designed to reduce noise. ®

Stanford's jet-noise simulation needed a million cores

Image courtesy of the Center for Turbulence Research, Stanford University
