Students prep for battle of wits, MIPS and watts: It's student cluster-wrestling time
Kids face off against HPC industry professionals
SCC'13 The countdown is on. Twelve university teams are preparing for another epic battle at the student cluster-wrestling match, the SC13 Student Cluster Competition (SCC).
The gig will take place during the 2013 Supercomputing Conference in Denver, Colorado, beginning 18 November.
The teams will compete in the SC13 exhibition hall to see whose supercomputer is the fastest on a variety of HPC benchmarks and workloads. There are two competition tracks this year: Big Iron and Commodity Iron.
But first, here’s some breaking cluster news: For the first time ever, there will be a special “Celebrity Pro-Am Cluster Challenge” pitting HPC industry professionals against the student competitors to see who can reign supreme on a surprise application. More details on this as they become available.
Big iron or cheap iron?
The Big Iron competitors can bring as much hardware as their hearts desire (and sponsors will donate), but they can’t have a configuration that exceeds a 26-amp power limit when the system is fired up with a running operating system. Students in both tracks can run and tune the OS of their choice and use whatever compilers, schedulers, and other software they need.
The Commodity track competitors can run any off-the-shelf (or “off the wall”, according to the competition overview) hardware they like. But the cost of their entire cluster can’t exceed $2,500, and they can’t draw more than 15 amps of juice. Teams will have to disclose the source and retail price of their components for verification by the competition committee. This means that they won’t be able to gain competitive advantage by having their Aunt Biddy “sell” them a brace of Tesla or Phi cards for $5 each and a hug.
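As a rough yardstick, the amp caps translate into wattage budgets along these lines. This is a back-of-the-envelope sketch assuming standard 120 V US exhibition-floor circuits; the actual supply voltage isn't stated in the rules:

```python
# Rough power budgets implied by the competition's amp caps.
# The 120 V figure is an assumption (typical US circuit), not a
# number from the competition rules.
VOLTS = 120

big_iron_watts = 26 * VOLTS    # Big Iron track: 26 A cap
commodity_watts = 15 * VOLTS   # Commodity track: 15 A cap

print(big_iron_watts)   # 3120
print(commodity_watts)  # 1800
```

In other words, the Big Iron teams have roughly 3 kW to play with, and the Commodity teams well under 2 kW, which is why tuning for performance-per-watt matters as much as raw speed.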
Both sets of teams will run the same workloads and receive points for: system performance; the quality and precision of their results; and interviews with contest judges who assess how well the students understand their systems and workloads.
They’ve had since early spring to design their clusters, negotiate with a hardware sponsor for the gear they can use, and figure out how best to configure, tune, and test their boxes. They’ve also been boning up on the applications they’ll have to run in the competition. (More on this in a later article.)
All of this hard work comes into play at the SC13 show in Denver. This competition is a 48-hour marathon that begins on Monday, November 18 and ends on Wednesday afternoon. The students will be running full-out day and night, driving their machines and themselves to the limit – all while staying under their respective power caps.
Once the competition starts, students aren’t allowed to change their physical systems (except in the case of a component failure) and can’t physically power down any component.
Students also won’t be able to operate their systems from the comfort of their hotel rooms. While the clusters do have network connections, they are one-way only, meaning that the teams can’t launch jobs remotely.
Is it worth it?
So what’s the payoff for months of hard work? For starters: gallons of lukewarm coffee, dinners composed of snack food, and a bad case of sleep deprivation. The winning teams get bragging rights plus commemorative plaques.
But the real payoff is the priceless real-world experience they earn. These kids have learned how to design and bring up a cluster, then run a set of complex applications while maximizing throughput and stability and minimizing energy usage. They also know how to work effectively within a team. These students have a skill set that puts them in high demand for the best jobs in both research and industry computing.
We’ll be kicking off our coverage of the SC13 Cluster Competition with our usual team profiles, a look at the applications (including a couple of webcasts where app experts toss out hints and tips), and a betting pool where readers can put money on their favourite team(s). Stay tuned, folks! ®