Original URL: https://www.theregister.com/2013/01/29/million_core_milestone/

Researchers break records with MILLION-CORE calculation

One app, million cores – but it wasn't Crysis...

By Dan Olds

Posted in HPC, 29th January 2013 10:18 GMT

HPC blog Stanford Engineering's Center for Turbulence Research (CTR) has claimed a new record in computational science by running a fluid dynamics problem with a code named CharLES that utilised more than one million cores on the hulking great IBM Sequoia at once.

According to the Stanford researchers, it’s the first time this many cores have been devoted to a fluid simulation. In this case, the boffins were modelling jet engine exhaust in an attempt to reduce the noise it generates during takeoffs and landings.

If you need a million-core system to run your code, there aren’t a lot of choices today. In fact, there are only two million-core-plus supercomputers that we know of: 1) Oak Ridge’s AMD/NVIDIA-based Titan and 2) Lawrence Livermore National Lab’s BlueGene/Q-based Sequoia. The Stanford guys used the 1,572,864-core Sequoia system, probably because it’s an easy drive from Palo Alto to Livermore, CA. (Head over the Dumbarton Bridge, then take the 880 to the 580. That’s how I’d go.)

The computer code used in this study is named CharLES and was developed by former Stanford senior research associate Frank Ham. The code utilises unstructured meshes to simulate turbulent flow in the presence of complicated geometry.
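For readers who haven’t bumped into the term: an unstructured mesh stores cell connectivity explicitly (which cells each face joins) rather than implying it from a regular i,j,k grid, which is what lets a solver wrap around awkward shapes like an engine nozzle. The toy Python sketch below is emphatically not CharLES and doesn’t attempt any turbulence modelling; all the names in it (phi, step, u_face) are made up for illustration. It just shows the face-loop structure that unstructured finite-volume codes are built around, using a simple upwind advection update on an eight-cell ring.

```python
# Toy sketch (not CharLES): a finite-volume update on an "unstructured"
# mesh, where cells are linked by an explicit face-to-cell list instead
# of a regular grid. Production LES codes do this in 3D, in parallel,
# across huge numbers of cores.
import numpy as np

# A tiny ring of cells described the unstructured way:
# each face records the two cell indices it separates.
n_cells = 8
faces = [(i, (i + 1) % n_cells) for i in range(n_cells)]  # (owner, neighbour)
volume = np.full(n_cells, 1.0)   # cell volumes
u_face = 1.0                     # advection speed through every face
dt = 0.1                         # time step

# Initial scalar field: a single "blob" in cell 0.
phi = np.zeros(n_cells)
phi[0] = 1.0

def step(phi):
    """One explicit upwind finite-volume step, looping over faces."""
    flux_sum = np.zeros_like(phi)
    for owner, neigh in faces:
        # Upwind flux: the face carries the value of the upstream cell.
        upstream = phi[owner] if u_face > 0 else phi[neigh]
        flux = u_face * upstream
        flux_sum[owner] -= flux   # leaves the owner cell
        flux_sum[neigh] += flux   # enters the neighbour cell
    return phi + dt * flux_sum / volume

for _ in range(5):
    phi = step(phi)
print(phi)  # the blob drifts around the ring; total mass is conserved
```

The point of the sketch is the inner loop: it walks a list of faces with arbitrary neighbour indices rather than a fixed i±1 stencil, and that same face list is what gets partitioned across MPI ranks when a run is spread over a million-odd cores.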

Stanford University also alluded to the difficulty inherent in pushing applications to this scale. I was surprised to read that the combined Stanford/LLNL team was able to pull this off with only “a few weeks” of planning and tuning. That’s definitely a resume-worthy achievement. ®