Students smash competitive clustering LINPACK world record

The kids are all right

By Dan Olds, OrionX

Posted in HPC, 3rd July 2017 08:01 GMT

HPC Blog Enormous happenings at the ISC17 Student Cluster Competition, where students from the Friedrich-Alexander-Universität (FAU) broke the student cluster competition world record for HPL (LINPACK).

This gives the home court German team the coveted Highest LINPACK award.

This marks the first time a German team has won a major performance award on their home soil and is likely to result in the declaration of a national holiday featuring parades and a statue raising in their home town of Nuremberg.

Lins Packed to the Max

On the LINPACK benchmark, it was a pretty close grouping among the top five finishers, with FAU grabbing the top slot with a score less than 10 per cent better than second-place Purdue/NEU.

Both teams were sporting GPU-heavy systems. Purdue/NEU had two nodes hosting an eye-popping 16 NVIDIA P100 GPUs, while FAU took a more conservative approach, using just 12 P100s in their dual-node box. (Yeah, 12 P100s is conservative... lol.)

Pre-competition, the FAU team confided to me that they had tested the 16-GPU configuration and found that their results were actually lower than with fewer GPUs. They speculated that 16 GPUs was simply too much for the PCIe bus to handle, and that the resulting congestion on the bus was what was dragging down their scores.
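Curious how you'd spot that kind of bottleneck? Here's a minimal sketch using the standard CUDA runtime API (the program is mine, not the team's) that maps which GPU pairs in a box can talk peer-to-peer. Pairs that can't are forced to stage traffic through the host, which is exactly the kind of congestion FAU described:

#include <stdio.h>
#include <cuda_runtime.h>

/* Sketch: map peer-to-peer reachability between every GPU pair.
 * Pairs that report "no" must bounce data through host memory,
 * loading up the PCIe root complex. */
int main(void) {
    int n = 0;
    cudaGetDeviceCount(&n);
    printf("%d GPUs found\n", n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, i, j);
            printf("GPU %d -> GPU %d: peer access %s\n", i, j, ok ? "yes" : "no");
        }
    }
    return 0;
}

Compile it with nvcc and run it on the node; on a dense 16-GPU box you'd expect plenty of "no" entries between GPUs hanging off different PCIe root complexes.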

Purdue/NEU didn’t have much time with their machine, so they didn’t have the opportunity to test all of the possible configurations, which is probably why they missed out on the FAU discovery.

EPCC was right in there with their "rule of three" cluster – three nodes, nine GPUs, plus liquid cooling. I would have liked to see them take advantage of their liquid cooling by overclocking their GPUs. I think that would have made the difference between their third place finish and grabbing the LINPACK trophy. But what the hell do I know? I’ve never built a cluster.

Every one of the top four teams beat the existing HPL record of 30.71 Tflop/s, established at the ASC17 competition in Wuxi. Nanyang just barely finished below that mark.
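(Quick refresher on what that number means: HPL solves a dense N-by-N linear system and reports floating-point operations per second. The flop count is dominated by the roughly (2/3)N³ operations of the LU factorisation, so the score works out to about

R ≈ (2/3 · N³) / t

for a run that finishes in t seconds. Teams tune N, block sizes and the process grid to squeeze that ratio as high as the competition's power cap allows.)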

Historically, here’s how the LINPACK numbers look in context:

[Chart: student cluster competition LINPACK scores by event]

As you can see, scores took a huge leap at SC16 when NVIDIA P100 GPUs came onto the scene. We saw another small improvement at ASC17, but then another largish jump at ISC17.

I’m not sure why we saw such a big bump this time around, but I think it has something to do with the form factors the winning teams were using. Packing eight GPUs into a single node lets them communicate far more effectively with each other and with the host processors than spreading those same eight GPUs across four nodes, where every exchange has to cross the cluster interconnect.
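If you want to see the difference for yourself, here's a rough sketch (assumptions are mine: two GPUs visible as devices 0 and 1, and a made-up 256MB test buffer) that times a direct GPU-to-GPU copy with the CUDA runtime. Run it on node-local pairs and compare against your MPI numbers across the interconnect:

#include <stdio.h>
#include <cuda_runtime.h>

/* Sketch: time a device-to-device copy from GPU 0 to GPU 1.
 * cudaMemcpyPeer takes the direct path when peer access is
 * possible, and falls back to staging through host memory
 * when it isn't -- the slow path. */
int main(void) {
    const size_t bytes = 256u << 20;   /* 256 MB test buffer */
    void *src = NULL, *dst = NULL;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("GPU0 -> GPU1: %.1f GB/s\n", (bytes / 1.0e9) / (ms / 1000.0));
    return 0;
}

Node-local copies over PCIe or NVLink will generally beat anything that has to hop across the cluster fabric, which is the whole argument for the fat single-node form factor.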

The students at ISC17 have also broken the student HPCG record, so stay tuned…
