NVIDIA blog bitchslaps Intel

Benchmark this

An email from my friendly NVIDIA rep called my attention to this recent blog post from Andy Keane, head GPU honcho at NVIDIA.

In the post, Keane thoroughly pounds Intel for presenting a paper titled Debunking the 100X GPU vs. CPU Myth, whose abstract asserts that an older NVIDIA GPU (the GTX 280) is only 2.5x faster than Intel's most current quad-core Core i7-960. Intel does a very scholarly job in the paper of laying out its benchmarks, methodology, and results. But it makes one wonder: could Intel have, well… cherry-picked the benchmarks to put the best face on its own silicon?

I’m sure it’s hard for any of us to imagine this being the case, but the question needs to at least be asked. For my part, I’ve talked to quite a few real-world folks who have seen 5x, 10x, 20x, and, yes, even 100x speed-ups when they run (and optimize) their code on GPUs or other accelerators.

This isn't an NVIDIA-only phenomenon; I've heard the same from customers using IBM Cell processors and FPGAs. For highly parallelized code, accelerators run rings around CPUs – with the term 'rings' defined as 'greater than only 2.5x faster.' So does that mean accelerators are the 'one true answer' for HPC? Nope, there's no such thing in this industry… but accelerators like GPUs and FPGAs can provide a huge benefit when used in the right situation – which is why they're hot products right now, and why Intel felt the need to slap them down.
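For flavour, here's the shape of code we're talking about – a minimal, hypothetical CUDA SAXPY sketch of my own, not anything lifted from Intel's or NVIDIA's benchmark suites. Every element of the result is independent, which is exactly the sort of problem a GPU devours:

// saxpy.cu – a hypothetical sketch, not either vendor's benchmark code.
// SAXPY (y = a*x + y) is embarrassingly parallel: every element is
// independent, so a GPU can work on thousands of them concurrently.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                  // a million elements
    size_t bytes = n * sizeof(float);

    // Host-side data.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy to the GPU, launch enough 256-thread blocks, copy back.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);  // implicitly syncs

    printf("y[0] = %f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

Whether that buys you 2.5x or 100x depends, as always, on how much of your application looks like that loop and how hard you optimize both sides of the comparison.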

I have to wonder if this paper would have been written if Larrabee had survived. ®
