Researchers break records with MILLION-CORE calculation

One app, million cores – but it wasn't Crysis...

HPC blog Stanford Engineering's Center for Turbulence Research (CTR) has claimed a new supercomputing record by running a fluid dynamics problem with a code named CharLES that utilised more than one million cores at once on the hulking great IBM Sequoia.

According to the Stanford researchers, it’s the first time this many cores have been devoted to a fluid simulation. In this case, the boffins were modeling jet engine exhaust in an attempt to reduce the noise during takeoffs and landings.

If you need a million-core system to run your code, there aren’t a lot of choices today. In fact, there are only two million-core-plus supercomputers that we know of: 1) Oak Ridge’s AMD/NVIDIA-based Titan and 2) Lawrence Livermore National Lab’s Blue Gene/Q-based Sequoia. The Stanford guys used the 1,572,864-core Sequoia system, probably because it’s an easy drive from Palo Alto to Livermore, CA. (Head over the Dumbarton Bridge, then take the 880 to the 580. That’s how I’d go.)

The computer code used in this study, CharLES, was developed by Stanford senior research associate Frank Ham. The code uses unstructured meshes to simulate turbulent flow in the presence of complicated geometry.
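For readers wondering what "unstructured" buys you: on a structured grid, a cell's neighbours are implicit in its (i, j, k) index, but an unstructured mesh stores connectivity explicitly, which is what lets it hug awkward shapes like a jet nozzle. Here's a minimal Python sketch of that idea, using a simple shared-face adjacency scheme of our own invention – not CharLES's actual data layout, which isn't public:

```python
# Illustrative unstructured-mesh adjacency sketch (hypothetical data
# layout, NOT CharLES's internals). Cells that share a face ID are
# treated as neighbours.

from collections import defaultdict

# Each cell maps to a tuple of face IDs.
cells = {
    0: (10, 11, 12),
    1: (12, 13, 14),   # shares face 12 with cell 0
    2: (14, 15, 16),   # shares face 14 with cell 1
}

def build_adjacency(cells):
    """Map each cell to the set of cells it shares a face with."""
    face_to_cells = defaultdict(list)
    for cid, faces in cells.items():
        for f in faces:
            face_to_cells[f].append(cid)
    adjacency = defaultdict(set)
    for owners in face_to_cells.values():
        for a in owners:
            for b in owners:
                if a != b:
                    adjacency[a].add(b)
    return dict(adjacency)

print(build_adjacency(cells))  # cell 1 neighbours both 0 and 2
```

The point is that neighbour lookups become a data-structure walk rather than index arithmetic – flexible for geometry, but harder to partition evenly across a million-odd cores.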

Stanford University also alluded to the difficulty inherent in pushing applications to this scale. I was surprised to read that the combined Stanford/LLNL team was able to pull this off with only “a few weeks” of planning and tuning. That’s definitely a resume-worthy achievement. ®
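Bootnote: For a back-of-the-envelope sense of why million-core scaling is so unforgiving, consider Amdahl's law – this is our own illustration, not the Stanford team's analysis. Even a tiny fraction of non-parallelisable work caps the achievable speedup far below Sequoia's core count:

```python
# Amdahl's law sketch (illustrative arithmetic only): speedup on N cores
# when a fraction s of the work is inherently serial.

def amdahl_speedup(serial_fraction, cores):
    """Ideal speedup: 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

cores = 1_572_864  # Sequoia's core count
for s in (0.01, 0.001, 0.0001):
    print(f"serial fraction {s}: speedup ~{amdahl_speedup(s, cores):,.0f}x")
```

With even 1 per cent of the work serial, 1.57 million cores deliver a speedup of barely 100x – which is why getting a real CFD code to this scale in "a few weeks" is such a feat.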
