Euro students cluster fest: Configurations LAID BARE

Universities spark Kepler vs Phi Face-Off

HPC blog The configurations of the systems to be used by the young HPC warriors in the 2013 International Supercomputing Conference's Student Cluster Challenge were released last week, and have since been verified to make sure that any last-minute changes are accurately reflected.

Here’s the "big table" showing some of the details for each team:

After eyeing the table above, a few points jumped out at me:

  1. Accelerators are good: Everyone is using accelerators this year, either NVIDIA GPUs or Intel Xeon Phi co-processors. We’ve seen accelerator use steadily increase since 2010, when they made their first appearance. In that first year, teams using GPUs did "OK", but their apps weren’t optimised well enough to get much benefit from them. Since 2011, though, the teams with the best results have all used accelerators.
  2. How much is too much?: Can you have too much of a good thing? Team Chemnitz is putting this to the test with their “Coffee Table of Doom”: four workstations, each packing FOUR accelerators – two Xeon Phi cards and two NVIDIA K20 cards per box. One thing they’ve found is that running all of them at once requires a lot of juice – well over the 3,000 watt limit (see the back-of-the-envelope sketch after this list).
  3. How much is too much, part II: Team Tsinghua and Team South Africa both have more nodes (eight) than any other competitor. Team Tsinghua has maxed out their cluster memory with an astounding 1TB. The average system in the field (not including Tsinghua) sports a little over 400GB of RAM, so Tsinghua is more than doubling that with their full TB.

    I’ve always believed in the "more memory – more better" approach, but I’m a little less sure about having more nodes than the other teams. I could put together a mumbling, fumbling, and inarticulate argument either in favor of more nodes (reduced chance of I/O and interconnect contention) or against more nodes (higher electrical use to support extra chassis and interconnect).
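
Chemnitz’s power problem is easy to sanity-check from spec sheets. Here’s a minimal back-of-the-envelope sketch in Python; the 225W TDPs for the Tesla K20 and a 5110P-class Xeon Phi are nominal spec values, and the 250W-per-box host figure is my assumption, not Chemnitz’s measurement:

```python
# Rough power estimate for Team Chemnitz's four-workstation,
# sixteen-accelerator "Coffee Table of Doom". Figures are nominal
# TDPs/assumptions, not measured draws -- treat this as a ballpark.

K20_TDP_W     = 225   # NVIDIA Tesla K20 board power (spec)
PHI_TDP_W     = 225   # Xeon Phi 5110P-class board power (assumed model)
HOST_W        = 250   # assumed per-box budget for CPUs, RAM, disks, fans

NODES         = 4
K20S_PER_NODE = 2
PHIS_PER_NODE = 2
POWER_CAP_W   = 3000  # competition limit

per_node = K20S_PER_NODE * K20_TDP_W + PHIS_PER_NODE * PHI_TDP_W + HOST_W
total = NODES * per_node

print(f"per box: ~{per_node} W, cluster: ~{total} W "
      f"({total - POWER_CAP_W:+} W vs the {POWER_CAP_W} W cap)")
# per box: ~1150 W, cluster: ~4600 W (+1600 W vs the 3000 W cap)
```

Even with generous rounding, all sixteen accelerators running flat out blow well past the cap – which squares with the team’s finding that they can’t light them all up at once.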

I’ll be posting interviews with each team, as usual, and I’m going to ask them about their experiences with the GPUs and co-processors. Did they have any problems either getting or optimising any of the applications? Or was it smooth sailing?

It also occurs to me that this competition will be the first head-to-head Phi vs Kepler battle. Students will be running the same applications on nearly the same hardware and have to contend with a 3,000 watt power cap. I don’t expect to see a clear-cut winner in this burgeoning Intel vs NVIDIA war, but it will be interesting to check out the detailed results to see if there are any conclusions we can draw. ®
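
When the detailed results do come out, the fairest single number for a power-capped competition is probably performance per watt. Here’s a quick sketch of that comparison; every GFLOPS and wattage figure below is a hypothetical placeholder, not a competition result:

```python
# Compare accelerator setups on performance per watt under a fixed cap.
# All numbers below are hypothetical placeholders -- swap in real LINPACK
# scores and metered wall power once the competition results are published.

POWER_CAP_W = 3000  # competition limit

def perf_per_watt(gflops: float, watts: float) -> float:
    """GFLOPS delivered per watt of measured wall power."""
    return gflops / watts

# team setup: (sustained GFLOPS, measured wall power in watts) -- placeholders
runs = {
    "Kepler (K20) cluster": (8000.0, 2900.0),
    "Xeon Phi cluster":     (7500.0, 2850.0),
}

for name, (gflops, watts) in runs.items():
    print(f"{name}: {perf_per_watt(gflops, watts):.2f} GFLOPS/W, "
          f"{POWER_CAP_W - watts:.0f} W of headroom under the cap")
```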
