
SUPERCOMPUTER vs your computer in bang-for-buck battle

iPad 2 pwns Cray-2? Wife’s desktop beats all?


HPC blog A couple of weeks ago I posted a blog here (Exascale by 2018: Crazy...or possible?) that looked at how long it took the industry to hit noteworthy HPC milestones. Chatter in the comments section (aside from the guy who assailed me for a typo, and for not explicitly calling out ‘per second’ denotations) discussed what these massive systems do and why they’re necessary.

But Reg readers' comments, plus others that I received via Twitter, raised some interesting questions that I’m going to attempt to answer – or at least sort of answer. The first is: just how much did these systems cost new?

When these systems came out, they were the biggest and baddest supercomputers in the world. But the price tag that the vendor attaches to a system in a press release and the actual price paid by the customer may bear little or no relationship to each other, or to what the system cost to develop and build.

The price also varies depending on when in the product lifecycle you purchase the system. Buying the first one doesn’t mean that you’re necessarily paying the top price. If you’re the kind of customer who might buy boatloads of them, you would probably get a break. It also helps if you’re on the understanding side when it comes to performance qualification and bug fixes. Plus the right customer can validate a design, and that’s worth something to vendors.

[Table: representative early-life prices for each record-holding supercomputer, with a final column adjusting those prices to 2010 dollars]

In the table above, I did my best to find representative early-life prices for each system. It was easier to find prices for the later systems than for the CDC and Cray boxes. I found ranges of prices for the CDC and Cray-2 systems, so I took the average of those figures.

The final column adjusts those prices to 2010 dollars to level the playing field. Even though the cost of computing has gone down incredibly (as we’ll see below), the cost of BIG computing – the cost of the fastest system in the world – has increased considerably from the $50m CDC 6600 to the $101m IBM Roadrunner. The K computer is a bit of a special case. The $1.25bn figure supposedly represents the cost of design, development and the actual gear – but I don’t know if it’s an apples-to-apples comparison to the others.
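For the curious, the adjustment in that final column is just a consumer-price-index ratio. Here's a minimal sketch of the arithmetic in Python, using rough, illustrative CPI figures and a hypothetical 1964 list price rather than the exact deflators and prices behind the table:

    # Back-of-envelope inflation adjustment to 2010 dollars.
    # CPI values are approximate, illustrative figures only.
    CPI = {1964: 31.0, 1985: 107.6, 2008: 215.3, 2010: 218.1}

    def to_2010_dollars(price, year):
        """Scale a nominal price by the ratio of 2010 CPI to that year's CPI."""
        return price * CPI[2010] / CPI[year]

    # Example: a hypothetical $8m list price paid in 1964
    print(round(to_2010_dollars(8_000_000, 1964) / 1e6, 1), "million 2010 dollars")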

The second theme among readers’ comments was: how do these levels of performance (and associated prices) relate to the systems that we use day in and day out? This required some more Jethro Bodine ciphering time; I figured I’d benchmark some of the systems in our offices and see how they came out.

I wanted to use Linpack, so I first needed to find a distribution that works on our Windows 7 systems here. Yeah, yeah, I know that I should set up a dual boot with Linux and then run a ‘real’ Linpack in order to get better numbers, but I do have a regular day job.

Intel has a downloadable Linpack benchmark here that I put on three of our office systems. After perusing the documentation, I ran through some trial runs with different problem sizes in order to establish a performance range. What I found is that, on our systems at least, using the largest ‘typical’ problem set of 40,000 equations seemed to pull out the best Linpack average and peak results.
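For anyone who wants to follow along at home: Linpack's rating comes from the floating-point operation count of solving a dense n-by-n system by LU factorisation, roughly (2/3)n³ + 2n² operations, divided by the wall-clock time. A quick sketch of that calculation, with a made-up run time plugged in purely as an example:

    def linpack_gflops(n, seconds):
        """Approximate Linpack rating: flop count for an LU solve of an
        n x n dense system, divided by wall-clock time, in GFLOP/s."""
        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
        return flops / seconds / 1e9

    # Hypothetical run: 40,000 equations solved in 1,000 seconds
    print(f"{linpack_gflops(40_000, 1_000.0):.1f} GFLOP/s")  # ~42.7 GFLOP/s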

Our pal Jack Dongarra, one of the founders of the Top500 list, ran Linpack on an Apple iPad 2 and reported that the tablet hit between 1.5 and 1.65 GFLOP/s, which is higher than the Cray-2 managed back in 1985.

In the New York Times story, he also discussed the possibility of clustering iPads into a competitive supercomputer. He didn’t seem to feel that it would be a good price performer when compared to existing supercomputers, something that my research below confirms.
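A back-of-envelope dollars-per-GFLOP/s comparison shows why. The sketch below assumes the $499 entry-level iPad 2 price, Dongarra's ~1.6 GFLOP/s result, the $101m Roadrunner figure from above, and roughly one petaflop/s of Linpack for Roadrunner as an illustrative round number:

    # Rough price/performance comparison (all figures approximate/illustrative).
    ipad2 = {"price": 499.0, "gflops": 1.6}          # Dongarra's iPad 2 result, $499 base model
    roadrunner = {"price": 101e6, "gflops": 1.0e6}   # ~1 PFLOP/s assumed for IBM Roadrunner

    for name, sysinfo in (("iPad 2", ipad2), ("Roadrunner", roadrunner)):
        print(f"{name}: ${sysinfo['price'] / sysinfo['gflops']:.0f} per GFLOP/s")

    # iPad 2:     ~$312 per GFLOP/s
    # Roadrunner: ~$101 per GFLOP/s -- cheaper per unit of Linpack performance,
    # before even counting the interconnect you'd need to cluster the tablets.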
