Cash-strapped students hungrily eye up old, unloved racks

Student clusterers in flop-per-dollar face-off

SCC'13 How much supercomputing can you do with $2,500 worth of hardware? The four teams competing in the SC13 Student Cluster Commodity Track Competition will answer precisely this question - and more.

The Commodity Track is a new addition to the SC Student Cluster Competition event this year.

We all know and love the Standard Track: university teams build the fastest cluster they can, then compete live at the show to see who can turn in the best numbers on a set of HPC applications. The only limit on Standard Track competitors is the 26 amp (115 volt) power cap and the requirement that their gear fits in one rack.

In the Commodity Track, getting the most bang for your buck is the name of the game. Competitors have $2,500 they can use to buy components to build a true HPC cluster.

The teams in the Commodity Track will be required to run the exact same applications as the Standard Track big iron teams. These apps include the HPCC benchmark (with a separate LINPACK), NEMO5, WRF, GraphLab, and a “Mystery App” that will be revealed during the competition. (We’ll be discussing these apps in an upcoming article.)

Commodity Track rules of the road

The rules are pretty simple: teams have to field configurations of at least two nodes, and they can’t use more than 15 amps (at 115 volts) to power their creations. The components they use have to be commercially available but don’t have to be brand-new. So teams could scour eBay, Craigslist, and even garage sales in tech-centric neighborhoods to find deals on retired gear.
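Those amperage caps translate into hard wattage budgets. A rough sketch of the arithmetic (simple volts-times-amps products; real builds also have to allow for PSU efficiency and headroom, which this ignores):

```python
# Power envelopes implied by the competition rules.
def power_budget_watts(volts, amps):
    """Maximum sustained draw allowed under a given cap."""
    return volts * amps

standard_track = power_budget_watts(115, 26)   # Standard Track: 2990 W
commodity_track = power_budget_watts(115, 15)  # Commodity Track: 1725 W

# With the required minimum of two nodes, each Commodity Track node
# (plus its share of the switch) gets at most:
per_node = commodity_track / 2                 # 862.5 W

print(standard_track, commodity_track, per_node)
```

So a Commodity Track build has a little over 1.7kW to play with across the whole cluster, which matters once GPUs and overclocking enter the picture.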

However, competitors do have to provide organisers with a complete breakdown of their parts, sources, and prices. This is to ensure that everyone stays under the $2,500 limit.

Your money or your flops

Figuring out how to spend the money to get the most flops/dollar might seem simple on the surface. But as event organiser Daniel Kamalic commented recently, “In this track of the competition, you’re going to see some very (very!) creative solutions and approaches. These really exemplify the spirit of what we’re trying to develop in the next generation of computational scientists and engineers.”

Consider the different options for a moment. Given the cash constraints, we’re not going to see much in the way of fancy InfiniBand interconnects – it’s probably going to be 1GbE across the board. But that decision still leaves many questions unanswered.

What kind of nodes should they look for? At the high end, they could buy a dual-socket board for as little as $270. For this, they’d get six memory slots, one x16 PCIe slot, and a single GbE LAN port.

Adding a couple of CPUs (Xeons in this case) at $210 to $230 each drives the cost per node to around $700 – just for the CPU and motherboard.

Since you need two nodes to compete, you’d have to commit $1,400 of your budget just to the motherboard/CPU combo. This leaves only $1,100 for an enclosure, cables, power supplies, memory, storage, switches, and all the other bits. Is that enough money to bring the cluster to life?

At the low end of the spectrum, they could pick up some inexpensive, single-socket motherboards for as little as $50. These boards would have two memory slots, a single GbE LAN connection, and a single x16 PCIe slot. A dual-core CPU (socket LGA 1155, for example) starts at around $60; a quad-core runs about $200.

With single-socket nodes, the motherboard/CPU cost could be as low as $110 each. Two of these nodes would total $220 vs. the $700 cost of the single dual-socket node we discussed a few paragraphs back. Of course, we’d still need to add power supplies, memory, disk, etc., and some of these costs will be higher because you need multiple items – like dual power supplies and cables – to support two nodes vs. a single node.
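The back-of-envelope tally of the two strategies, using the article's example prices (street prices for used gear will obviously vary):

```python
# Budget left over after the motherboard/CPU combos for each strategy.
BUDGET = 2500  # Commodity Track spending cap, in dollars

# Dual-socket option: $270 board + two Xeons at roughly $215 each
dual_socket_node = 270 + 2 * 215            # ~$700 per node
dual_socket_pair = 2 * dual_socket_node     # $1400 for the required two nodes

# Single-socket option: $50 board + $60 dual-core CPU
single_socket_node = 50 + 60                # $110 per node
single_socket_pair = 2 * single_socket_node # $220 for two nodes

# What's left for memory, storage, PSUs, enclosure, switch, and cables:
print(BUDGET - dual_socket_pair)    # dual-socket route:   $1100 remaining
print(BUDGET - single_socket_pair)  # single-socket route: $2280 remaining
```

The cheap route leaves more than twice the headroom for everything else – or for more nodes.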

Another consideration is whether to try to jack up performance by adding some GPUs to the mix. Even the least expensive NVIDIA GPUs these days are CUDA compatible. For example, the Tesla-based GeForce 210 retails for around $30, but still packs a decent number-crunching punch.

Doubling GPU spending to $60 per card will get you a Fermi-based GT 630 that should deliver close to 2x the performance of the Tesla cards. NVIDIA has a handy table showing relative performance values for its GPUs on its website.

Teams could also practise the formerly black art of overclocking their CPUs and/or GPUs. To me, overclocking isn’t as scary as it once was; there are plenty of helpful "how to" guides in internet land. But, of course, none of these guides will guarantee that you won't fry your chips. Doing a significant overclock means the teams will have to pay more attention to their motherboard/CPU combinations and will certainly have to increase their cooling capacity.

If I were heading down the overclocking road, I’d configure in some liquid cooling or maybe immerse the whole damn thing in a vat of mineral oil. Sure, you’d have to strip off the fans and remotely connect the drives (or seal them up), but you’d take heat off the table as a factor. Add in a cheap pump, a junkyard radiator, some tubing, a fan, and there you go.

Given the same $2,500 budget, what would you build? Would you go with dual-socket nodes or mini-board single-socket motherboards? How many and what kind of GPUs would you add? And would you go for broke and overclock it? ®
