Nvidia and HPC's second act

Sitting pretty - but for how long?

In a lot of ways, Nvidia is the belle of the GPU/accelerator ball these days. (Make your reservations early for the upcoming "GPU Fancy Dress Cotillion" later on this year; tuxedo t-shirts encouraged.) Intel withdrew Larrabee, IBM isn't pushing Cell, FPGAs aren't gaining a lot of traction yet, and AMD is late to the party with Fusion.

This leaves Nvidia in a position where it is the only major vendor offering accelerator gear that has enough of a developed ecosystem to make it reasonably easy for HPC types to take advantage of it. But this isn't going to last forever.

We're starting to see the end of the first act in the accelerator stage play. By that, I mean that the value proposition for accelerators is well-established in low-hanging-fruit markets like HPC. This isn't to say that everyone who is going to use accelerators has bought them – far from it. There is still a long way to go in terms of market and sales growth.

But the value of accelerators as a great way to crank up performance at a reasonable cost is now part of conventional wisdom in HPC. There is also enough ecosystem out there to enable HPC-type customers to accelerate the hell out of almost any workload.

So if that's the first act, what's the second? To me, it is when HPC-esque workloads (enterprise analytics, for example) move into the mainstream business computing market. This has happened in banking and on Wall Street, and also for some large companies in other industries (Wal-Mart is an often-cited example).

This is a much larger market opportunity than HPC, and it's the pot of gold at the end of the rainbow for Nvidia, AMD, and other accelerator vendors. However, enterprise customers are not like HPC customers. Corporate types won't roll their own code to take advantage of GPUs, FPGAs, or much of anything else – no matter how great the performance advantage. They'll look to their ISVs to provide apps that use accelerators out of the box – apps that automatically discover whatever accelerator hardware is present and route the appropriate compute tasks to it.
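
To make that "out of the box" idea concrete, here's a minimal C sketch of the kind of probe-and-dispatch logic an ISV app would bury in its setup code: check whether a usable GPU exists, and quietly stay on the CPU path when it doesn't. It assumes the CUDA runtime is installed, and dispatch_work() is a hypothetical stand-in for the app's own compute path, not any vendor's actual API.

/* accel_probe.c - minimal sketch of runtime accelerator discovery.
 * Assumes the CUDA toolkit is installed; build with something like:
 *   gcc accel_probe.c -I/usr/local/cuda/include -lcudart
 * dispatch_work() is hypothetical, standing in for an ISV app's
 * own compute path.
 */
#include <stdio.h>
#include <cuda_runtime_api.h>

/* Return 1 if at least one usable CUDA device is visible. */
static int accelerator_available(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess)
        return 0;              /* no driver, or the call failed */
    return count > 0;
}

static void dispatch_work(void)
{
    if (accelerator_available()) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        printf("Routing compute tasks to GPU: %s\n", prop.name);
        /* ...launch GPU kernels here... */
    } else {
        printf("No accelerator found, staying on the host CPU\n");
        /* ...fall back to the plain CPU code path... */
    }
}

int main(void)
{
    dispatch_work();
    return 0;
}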

ISVs will want to provide this capability in order to gain advantage over their competitors. However, they aren't going to do it until the picture becomes a bit clearer. They will need to see customer demand (we're not quite there yet), and they’ll need to see which is the 'right' way to go.

Nvidia, with its heavy push forward on CUDA development, is doing its damnedest to answer that question. AMD is pushing OpenCL as the answer – and given the openness of it (hell, it even has the word 'open' in its name), it’s in some ways an even better answer to the “Which accelerator do we write to?” question.
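
To see why the "open" part matters to ISVs, here's a rough C sketch that simply lists every OpenCL platform and device on the machine, regardless of whether the driver underneath comes from Nvidia, AMD, or anyone else. It assumes an OpenCL SDK is installed; the vendor-neutral enumeration is the whole point of the standard.

/* cl_enum.c - rough sketch: enumerate every OpenCL platform and
 * device present, whoever made it.
 * Build with something like: gcc cl_enum.c -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; p++) {
        char plat_name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(plat_name), plat_name, NULL);

        cl_device_id devices[16];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       16, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; d++) {
            char dev_name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(dev_name), dev_name, NULL);
            printf("Platform '%s' offers device '%s'\n",
                   plat_name, dev_name);
        }
    }
    return 0;
}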

But AMD doesn't have the gear out there now – or at least not the kind of fully-productized stuff that Nvidia offers. Still, for the 'second act' opportunity, it's not fatally far behind. Assuming it executes according to its current roadmaps, we should see a nicely competitive horse race between Nvidia's Tesla and AMD's Fusion.
