
Nvidia boss: cloud, ¡Sí! Intel, ¡No!

The times they are a-changin' — big time

GTC Nvidia chief exec Jen-Hsun Huang sees the computer industry on the cusp of radical changes. And with his company now about 65 per cent devoted to parallel computing, you can easily guess which technology he believes will drive that transformation — and which company he believes will lose.

"Whatever capability you think you have today, it's nothing compared to what you're going to have in a couple years," he told the assembled multitude at a "Fireside Chat" Thursday morning at the GPU Technology Conference in San José, California.

That capability will be provided by parallel computing — or, as he told his keynote audience on Tuesday, "parallel computing, GPU computing, accelerated computing, heterogeneous computing — however you guys want to describe it."

Key to the transformation, in Huang's view, will be supercomputers up in the cloud, responding to queries from personal clients.

Although "cloud" is the buzzword du jour, in Huang's view you ain't seen nothing yet. "If you look at [users'] current experience of a cloud, they think it's pretty cool — but that's just an application that would run very naturally on a PC, done in the cloud," he said.

That's all well and good, but it's only the beginning. "We need to take that one step further, which is [that] something you can't do at all on a PC is now possible in your PC or in your tablet or in your cellphone, because it's done in the cloud."

Giving all users access to HPC-class compute power won't be possible until the price-per-flop of those beasts drops — but Huang had an answer at the ready when asked what the price of a supercomputer might be in five or ten years. "One answer is: same. It'll just be 100 times faster." That is, he explained, two orders of magnitude faster than a Tesla-based HPC machine of today.

But in a dig at traditional CPU-based HPC boxes, he added: "Relative to a cluster of CPUs, call it somewhere between a thousand, to 20, to 40 thousand times faster."

The other answer to "how much will it cost?" — cheaper but more-powerful machines — launched Huang into a flight of fancy concerning personal supercomputers: "Whatever roomful of supercomputers you're currently counting on, in 10 years' time you'll put it on a head-mounted display and take it into the field with you, because it's inside a Tegra," he fantasized.

He even had a productization idea. "I can't wait to see what a binocular is going to look like in 10 years. The combination between the kind of sensors we'll have and the type of computational capability we'll have, and its connection to the cloud — a binocular in 10 years is going to be some freaky device."

Huang's vision — no pun intended — is not merely that of supercomputers on your head, but of devices seamlessly intercommunicating, with HPC cloud clout being made available to mere mortals. He did, however, temper his enthusiasm a bit by admitting that "the computational resource is nearly infinite, but the bandwidth hasn't really changed."

When asked if the distinction between HPC boxes, high-performance workstations, and end-user devices will even matter in his interconnected future, Huang answered: "There's no question that it won't matter anymore."

Elaborating, he said: "The type of computing that you would otherwise need to do locally [will move] onto the cloud. I think the separation between a mobile device, a tablet, a PC, and a supercomputer will be nonexistent, because they will obviously be connected."

Such a topology is the undiscovered country. "The really magical and incomprehensible experiences that consumers will have as a result of a supercomputer literally in their hands," Huang mused, "I don't think most of them understand that yet."

And from Huang's point of view, parallel computing is just about ready to provide those "incomprehensible experiences", as the software-development world comes over to GPUville. The first couple of years of GPU computing, he said, were about getting people used to the idea that a new type of computing architecture could make much of general-purpose computation "twenty times faster."

Today, he said, "They now understand the technology at a deep enough level that they go: 'You know what? I believe it. But not only that, but it's perfectly obvious to me. I understand it, and now I get it. And I don't understand why everything isn't that way now.'

"And so now were in that phase," he said. "Everybody's now asking the question: 'Why isn't that parallel? Why isn't that parallel? Why isn't that parallel?' It's no different than us walking up to a car and saying 'Why isn't that a hybrid? Why isn't that a hybrid?'"

Nvidia's co-founder is not lacking in confidence — at least in front of an assembly of customers, developers, analysts, and the press.
