Euro students cluster fest: Configurations LAID BARE

HPC blog The configurations of the systems to be used by the young HPC warriors in the 2013 International Supercomputing Conference's Student Cluster Challenge were released last week, and they have now been verified to make sure that last-minute changes are accurately represented.

Here’s the "big table" showing some of the details for each team:

Here are some points that jumped out at me after eyeing the table above:

  1. Accelerators are good: Everyone is using accelerators this year, either NVIDIA GPUs or Intel Phi co-processors. We’ve seen accelerator use steadily increase since 2010, when they made their first appearance. In that first year, teams using GPUs did "OK", but their apps weren’t optimised enough to get much benefit from them. Since 2011, however, the teams with the best results have all used accelerators.
  2. How much is too much?: Can you have too much of a good thing? Team Chemnitz is putting this to the test with their “Coffee Table of Doom”, which consists of four workstations, each with FOUR accelerators. The team installed two Xeon Phi and two NVIDIA K20 cards in each box. One thing they’ve found is that using all of them at once requires a lot of juice – well over the 3,000 watt limit.
  3. How much is too much part ii: Team Tsinghua and Team South Africa both have more nodes (eight) than any other competitor. Team Tsinghua has maxed out their cluster memory with an astounding 1TB. The average system in the field (not including Tsinghua) sports a little over 400GB of RAM, so Tsinghua is more than doubling that with their full TB.

    I’ve always believed in the "more memory – more better" approach, but I’m a little less sure about having more nodes than the other teams. I could put together a mumbling, fumbling, and inarticulate argument either in favor of more nodes (reduced chance of I/O and interconnect contention) or against more nodes (higher electrical use to support extra chassis and interconnect).
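To see why Team Chemnitz's "Coffee Table of Doom" blows past the power cap, a quick back-of-envelope sketch in Python helps. The TDP figures below are my assumptions based on published board specs (roughly 225 W each for a Tesla K20 and a Xeon Phi, plus a guessed 300 W per host for CPUs, motherboard, and disks); actual draw under load will differ, but the point stands:

```python
# Rough power-budget check for a four-node, four-accelerator-per-node
# cluster under the competition's 3,000 watt hard limit.
# TDP values are assumptions, not measured figures.

POWER_CAP_W = 3000          # competition hard limit

NODES = 4                   # four workstations
K20_PER_NODE = 2            # two NVIDIA K20 cards per box
PHI_PER_NODE = 2            # two Xeon Phi cards per box

K20_TDP_W = 225             # assumed board TDP
PHI_TDP_W = 225             # assumed board TDP
HOST_TDP_W = 300            # assumed per-node CPUs/board/disks

# Accelerators alone: 4 nodes x (2 x 225 + 2 x 225) = 3,600 W
accelerators_w = NODES * (K20_PER_NODE * K20_TDP_W + PHI_PER_NODE * PHI_TDP_W)
total_w = accelerators_w + NODES * HOST_TDP_W

print(f"accelerators alone: {accelerators_w} W")
print(f"whole cluster:      {total_w} W")
print(f"over the cap by:    {total_w - POWER_CAP_W} W")
```

Under these assumptions, the 16 accelerators by themselves exceed the 3,000 watt cap before the host systems draw a single watt, which is why the team can't run all of them flat-out at once.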

I’ll be posting interviews with each team, as usual, and I’m going to ask them about their experiences with the GPUs and co-processors. Did they have any problems either getting or optimising any of the applications? Or was it smooth sailing?

It also occurs to me that this competition will be the first head-to-head Phi vs Kepler battle. Students will be running the same applications on nearly the same hardware and have to contend with a 3,000 watt power cap. I don’t expect to see a clear-cut winner in this burgeoning Intel vs NVIDIA war, but it will be interesting to check out the detailed results to see if there are any conclusions we can draw. ®
