Benchmarks are $%#&@!!
Secrets and solutions from a reformed benchmarketer
At SC11 I ran into Henry Newman, CEO of HPC consulting firm Instrumental Inc. After exchanging the usual pleasantries and deeply offensive personal insults, we got to talking about some of the recently released benchmark results – and how irrelevant most of them are to the real world.
In the course of the conversation, Henry told me that he was once a “slimy benchmarker.” He told me about some of the tricks that vendor benchmarkers use to make sure their systems shine brightly in customer bake-offs. Basically, if a particular trick (special tuning, hardware, etc.) “isn’t clearly and explicitly banned by the customer, then it’s fair game.”
I get the feeling that back in the day, Henry was a particularly savage and merciless benchmarker with a pretty good win/loss record. But now that he’s on the customer side of the industry, Henry has a different point of view, and wants to become part of the solution to the benchmarking problem (which he helped create, damn it).
Toward that end, he’s spent a lot of time working on a project with DARPA to develop a set of benchmarks that gets to the heart of what customers really need to know: how well systems scale from small to large.
In the webcast we talk about all of the above topics, drill down into specific scalability benchmarks, and give Henry a chance to confess his benchmarking sins. Give it a listen…