Scotland wins WORLD RECORD as voters head to referendum polls

Uni of Edinburgh team lands teraflop-tastic LINPACK laurels


HPC blog The LINPACK portion of the ISC’14 Student Cluster Competition (LINK) was supposed to be routine, according to the cluster competition wise guys. Sure, some student team might set a new record, but no one was expecting the new mark to break through the 10 TFLOP/s barrier.

Almost everyone expected the LINPACK crown to go to one of the Chinese powerhouse teams, or maybe the power mad Chemnitz team, or even returning former champion South Africa.

As the students finished their LINPACK runs, rumours of a new 10 TFLOP+ result started to swirl around the show floor. The scuttlebutt was that Team Edinburgh had driven what looked to be a puny setup to an unimaginable 10.1 TFLOP/s.

In this case, the rumours were absolutely correct. Team Edinburgh pushed their four-node cluster, equipped with only 80 CPU cores, and less memory (256 GB) than any other competitor, through the 10 TFLOP/s LINPACK wall. In the process, they soundly topped the other competitors and secured a big slice of cluster competition glory for Edinburgh, Scotland, and the UK as a whole.

The secret to Edinburgh’s triumph has a lot to do with the design of their cluster. While their traditional node-CPU-memory configuration was definitely on the small side, they crammed their cluster to the gills with eight NVIDIA Tesla K40 GPUs.

While this config potentially gave them a huge numeric processing punch, all of the other top LINPACK finishers were also sporting eight NVIDIA K40 GPUs.

The challenge for all of these teams was figuring out the best way to take advantage of their processing potential without going over the 3,000 watt power cap.

Every team with a large configuration (eight or more nodes) had to throttle down some part of their system in order to have enough wattage to fuel their GPUs. This usually meant slowing down their CPUs. But not Edinburgh. They jammed the throttle to the firewall on both their CPUs and GPUs, with nary a worry about the cap.

They were able to do this because they had a small configuration, but more importantly, they were able to do this because they were using liquid cooling.

Hot stuff running ice cold

Boston Group, their hardware sponsor, gave the team highly advanced liquid-cooled gear that included a radiator that could probably handle the heat generated by the Edinburgh cluster with enough headroom for two or three more Edinburgh-sized boxes.

Using liquid cooling allowed the team to get rid of a bunch of fans (several per node), which gave them the ability to run everything flat out.

I wasn’t expecting much from the LINPACK competition at ISC’14. At the ASC’14 competition, held only a few months earlier, home team Sun Yat-Sen University had set a new record of 9.272 TFLOP/s. Since there hadn’t been any new hardware introductions (particularly faster CPUs or GPUs) since ASC in April, I didn’t see how anyone was going to significantly top Sun Yat-Sen’s score.

But it just goes to show that if there’s a will, there’s a way. Edinburgh wanted that LINPACK record, so they figured out a way to get it. Congratulations to them, their sponsors, and everyone in Scotland.

Don't forget the rest of the field

It was also good to see the plucky South African team nail down the second highest LINPACK, and home team Chemnitz grab third place. I wasn’t surprised to see Team Shanghai in the mix for LINPACK domination, but I knew they had set their sights on the Overall Championship, which dictates a more balanced approach on the system configuration.

Here’s how the Edinburgh result fits into Student Cluster Competition LINPACK history.

As you can see, they’ve significantly raised the bar for student LINPACK-iness with an increase of 8.9 per cent over the previous record.

Looking at the same results from a GFLOP/s-per-watt perspective, the picture is identical.

In fact, when you run the numbers, the flops-per-watt increase is exactly the same 8.9 per cent improvement seen in the LINPACK scores themselves.
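A quick back-of-the-envelope sketch shows why the two gains match. If both record-setting runs drew the same power, the percentage improvement in flops/watt is, by construction, the same as the improvement in raw LINPACK score. (The equal-draw assumption here is mine: actual power figures per run weren't published, only that both stayed under the 3,000 watt cap.)

```python
# Assumption: both runs drew the same power, e.g. the full 3,000 W cap.
# Actual per-run power draw figures were not published.
POWER_W = 3000.0

edinburgh_tflops = 10.1    # Team Edinburgh, ISC'14 record
sysu_tflops = 9.272        # Sun Yat-Sen University, ASC'14 record

# Raw LINPACK score improvement
score_gain_pct = (edinburgh_tflops / sysu_tflops - 1) * 100

# GFLOP/s per watt at the assumed common power draw
edi_eff = edinburgh_tflops * 1000 / POWER_W
sysu_eff = sysu_tflops * 1000 / POWER_W
eff_gain_pct = (edi_eff / sysu_eff - 1) * 100

print(f"LINPACK gain:  {score_gain_pct:.1f}%")   # ~8.9%
print(f"Flops/W gain:  {eff_gain_pct:.1f}%")     # identical, since power cancels
```

The power term cancels out of the ratio, so any efficiency edge from liquid cooling would only show up if Edinburgh's actual draw were lower than the others' - which, as noted below, we can't verify.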

I was a bit surprised by this. I figured that the liquid cooling would yield a better flops/watt ratio than the air-cooled systems.

However, my analysis could be flawed. I don't have figures for exactly how much power each team was using for the LINPACK run they submitted. All I know is that they were using an amount that was less than the 3,000 watt power cap.

I still think that the liquid-cooled Edinburgh system was more efficient on a flops/watt basis, but I can't prove it. Much like my belief that carrot cake is not a real dessert - or my belief that I can win a fair fight with any dog in the world. ®

