Nvidia flexes Tesla muscles

Opens kimono, talks strategy

2010 was the breakout year for Nvidia’s Tesla division, according to Tesla VP Andy Keane, who spoke at the company’s Industry Analyst Day earlier this month. I think it’s pretty obvious that he’s right, and a quick review of the last year tells the story.

Three of the top five systems on the Top500 list sport Nvidia GPU accelerators. At SC10, Tesla GPUs were everywhere. They showed up in almost every hardware vendor booth, and most of the ISVs were either boasting about a CUDA-enabled piece of their application or discussing their future plans for it.

In his talk, Keane pulled back the covers further than I’ve seen before in any semi-public forum. He shared strategy and tactics, and even broke out some pretty impressive numbers for us. In Nvidia’s fiscal 2009 (which is mostly our calendar year 2008), Tesla revenue was about $10m for the year. It more than doubled to $25m in FY10 and quadrupled in FY11 (just completed) to top $100 million.

That’s pretty good growth, particularly when you factor in the poor economy and the associated pullback in most tech spending. For fiscal 2012, Nvidia expects sales volume to double again to $200m.

How big, you say?

One problem for Nvidia is estimating just how big its market actually is. Right now, every GeForce and Quadro product it sells can run CUDA, and there are a lot of these cards sitting in workstations and PCs – at least 200 million these days.

With CUDA downloads topping 700,000 by the end of 2010, Nvidia figures it has somewhere around 100,000 developers working with the code. Many of these developers are doing their work on GeForce or Quadro cards that aren’t captured in the Tesla revenue numbers cited above. So why the big ramp-up in revenue and market acceptance?

The obvious answer is because GPUs can run rings around traditional CPUs on highly parallel numerical processing workloads. But to me, the real answer is because Nvidia put in the time and effort necessary to build up the ecosystem surrounding Tesla. It correctly recognized that no one was going to develop CUDA-enabled apps if they had to roll their own tools. So putting together a development environment and tooling was job one, closely followed by convincing ISVs to Tesla-ize their wares.
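To make that concrete, here’s a minimal sketch (mine, not Nvidia’s) of the kind of data-parallel code CUDA is built for: a SAXPY kernel that updates a million array elements, one lightweight GPU thread per element. The sizes and names are illustrative.

#include <cstdio>
#include <cuda_runtime.h>

// SAXPY: y = a*x + y, with one GPU thread handling one array element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Set up host-side data.
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy inputs to the GPU, run the kernel, copy the result back.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);  // 256 threads per block

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);          // expect 4.000000

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

Launching thousands of threads that each touch one element is exactly the pattern that lets a Tesla card run rings around a CPU, and the tooling Nvidia shipped made writing code like this feel like ordinary C.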

The Nvidia strategy was to pick the leading apps in each segment and prove the case that GPUs could radically improve performance. Sometimes this involved working directly with the ISV; other times it came about by working with researchers who would then publish their findings, making sure to cite the role GPUs played in the process. Some examples of these killer apps include Amber in molecular dynamics, Ansys for engineering simulation, Autodesk’s 3ds Max for animation and rendering, and the venerable Matlab for mathy stuff.

This way, OEMs

At the same time, Nvidia greatly broadened its OEM strategy. In the early days, the company sold its own Tesla workstations to seed the market with systems. Beginning in 2008 or so, it started selling through SuperMicro. By 2010, its OEM list included every tier 1 vendor (Dell, HP, IBM) along with all of the specialized players such as Cray, SGI, Bull, T-Platforms and Appro. This puts Nvidia into everyone’s sales catalogs and system configurators, which is a big step.

Tesla isn’t a bleeding-edge choice anymore – at least not in HPC. It’s still newish to many customers, but the technology is now a mainstream, fully-supported alternative to traditional CPU-only system designs.

To me, the sky is the limit for GPUs. As enterprises increasingly implement predictive analytics, I foresee a need for speedy devices that can crank through huge numerical operations at low cost. Many of these workloads are a very good fit for GPUs, and the ability to purchase GPU capacity in small, inexpensive increments will speed adoption in corporate data centers.

Right now, with Intel on the accelerator sidelines and AMD still working to bring out its entries, the field is clear for Nvidia, and it’s making the most of it.
