Let's build some better HPC benchmarks
Sustained performance in, Peak out
SC11 As part of our preview of the upcoming SC11 event, I had a conversation last week with Jim Costa and Bill Kramer, the co-chairs of the SC11 Technical Committee. Conveniently, I recorded this conversation and even ran up some slides to guide us through various topics. The resulting webcast covers a lot of ground and shows the range of what you can see at SC in Seattle (or online if you can’t make the trip).
SC (it used to be called ‘Supercomputing’) is unlike a traditional industry trade show. It has different roots than other long-running industry gatherings, starting in 1988 as an event where doctoral candidates, grad students, and researchers could present their research findings and get them reviewed. This is still a large component of the event, with 353 papers submitted for SC11 (an increase of about 100 vs. 2010).
Over time, SC has become the premier event for high performance computing (HPC) in particular and, more and more, computer based scientific research in general. It has expanded to add user tutorials, panel discussions, and the trade show floor.
In the webcast we touch upon the make-up of the technical program and what’s new this year. One of the most interesting new additions is the “State of the Practice” venue, which gives users a place to exchange ideas about how to improve their HPC performance, management, and deployments.
There will be scheduled daily sessions with users presenting reports covering their own challenges and the best practices they’ve come up with to deal with them. Here’s a link with more info.
If you look at the State of the Practice link above, you’ll see that many of the sessions revolve around performance measurement. This plays into one of the major themes of SC11 - Sustained Performance.
This is SC’s attempt to move the focus in HPC from performance benchmarks such as LINPACK to measuring usable output from systems. A benchmark like LINPACK measures only limited aspects of a system that often don’t line up very well with what users are doing in the real world.
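To see why the distinction matters, here is a minimal sketch (all figures hypothetical, not drawn from any real system) of the gap between a machine's theoretical peak and the sustained rate an application actually achieves:

```python
# Hypothetical illustration: peak vs. sustained performance.
# All numbers below are made up for the example.

def theoretical_peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Vendor-style peak: every floating-point unit busy every cycle."""
    return cores * clock_ghz * flops_per_cycle

def sustained_gflops(useful_flops, wall_seconds):
    """What the application actually delivered end to end."""
    return useful_flops / wall_seconds / 1e9

# A hypothetical 16-core node: 16 * 2.5 GHz * 8 flops/cycle = 320 GFLOP/s peak.
peak = theoretical_peak_gflops(cores=16, clock_ghz=2.5, flops_per_cycle=8)

# A memory-bound real-world code might complete 1e12 useful flops in 25 s:
sustained = sustained_gflops(useful_flops=1e12, wall_seconds=25.0)

efficiency = sustained / peak
print(f"peak: {peak:.0f} GFLOP/s, sustained: {sustained:.0f} GFLOP/s, "
      f"efficiency: {efficiency:.1%}")
```

A dense-linear-algebra benchmark like LINPACK sits near the top of that efficiency range; many production workloads sit far below it, which is the gap the Sustained Performance theme is aimed at.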
As part of the Sustained Performance agenda, a series of sessions presents techniques for measuring actual throughput and modeling anticipated throughput, along with discussion of different approaches to both. Give the webcast a listen for more SC11 Technical Program details and discussion.