Nvidia flexes Tesla muscles

Opens kimono, talks strategy

2010 was the breakout year for Nvidia’s Tesla division, according to Tesla VP Andy Keane, who spoke at the company’s Industry Analyst Day earlier this month. I think it’s pretty obvious that he’s right, and a quick review of the last year tells the story.

Three of the top five systems on the Top500 list sport Nvidia GPU accelerators. At SC10, Tesla GPUs were everywhere. They showed up in almost every hardware vendor booth, and most of the ISVs were either boasting about a CUDA-enabled piece of their application or discussing their future plans for it.

In his talk, Keane pulled back the covers further than I’ve seen before in any semi-public forum. He shared strategy and tactics, and even broke out some pretty impressive numbers for us. In Nvidia’s fiscal 2009 (which is mostly our calendar year 2008), Tesla revenue was about $10m for the year. It more than doubled to $25m in FY10 and quadrupled in FY11 (just completed) to top $100 million.

That’s pretty good growth, particularly when you factor in the poor economy and the associated pullback in most tech spending. For 2012, Nvidia expects sales to double again, to $200m.

How big, you say?

One problem for Nvidia is estimating just how big its market actually is. Right now, every GeForce and Quadro product it sells can run CUDA, and there are a lot of these cards sitting in workstations and PCs – at least 200 million these days.

With CUDA downloads topping 700,000 by the end of 2010, Nvidia figures it has somewhere around 100,000 developers working with the code. Many of these developers are doing their work on GeForce or Quadro cards that aren’t captured in the Tesla revenue numbers cited above. So why the big ramp-up in revenue and market acceptance?

The obvious answer is that GPUs can run rings around traditional CPUs on highly parallel numerical processing workloads. But to me, the real answer is that Nvidia put in the time and effort necessary to build up the ecosystem surrounding Tesla. It correctly recognized that no one was going to develop CUDA-enabled apps if they had to roll their own tools. So putting together a development environment and tooling was job one, closely followed by convincing ISVs to Tesla-ize their wares.
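To make "highly parallel" concrete, here's a minimal, illustrative CUDA sketch (my own toy example, not taken from Nvidia or any of the apps discussed here): a vector add in which each GPU thread handles one array element, so a single kernel launch fans the arithmetic out across thousands of threads at once.

```
// Toy CUDA vector-add: one element per GPU thread.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) c[i] = a[i] + b[i];                   // each thread does one add
}

int main() {
    const int n = 1 << 20;                           // 1M elements
    size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);                    // expect 3.000000
    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

Compiled with nvcc, something like this is roughly the "hello world" of the toolchain Nvidia had to ship before ISVs would bite.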

The Nvidia strategy was to pick the leading apps in each segment and prove the case that GPUs could radically improve performance. Sometimes this involved working directly with the ISV; other times it came about by working with researchers who would then publish their findings, making sure to cite the role GPUs played in the process. Some examples of these killer apps include Amber for molecular dynamics, Ansys for engineering simulation, Autodesk’s 3ds Max for animation and rendering, and the venerable Matlab for mathy stuff.

This way, OEMs

At the same time, Nvidia greatly broadened its OEM strategy. In the early days, the company sold its own Tesla workstations to seed the market with systems. Beginning in 2008 or so, it started selling with SuperMicro. By 2010, its OEM list included every tier 1 vendor (Dell, HP, IBM) along with all of the specialized players such as Cray, SGI, Bull, T-Platforms and Appro. This puts Nvidia into everyone’s sales catalogs and system configurators, which is a big step.

Tesla isn’t a bleeding-edge choice anymore – at least not in HPC. It’s still newish to many customers, but the technology is now a mainstream, fully-supported alternative to traditional CPU-only system designs.

To me, the sky is the limit for GPUs. As enterprises increasingly implement predictive analytics, I foresee a need for speedy devices that can crank through huge numerical operations at low cost. Many of these workloads are a very good fit for GPUs, and the ability to purchase GPU capacity in small, inexpensive increments will speed adoption in corporate data centers.

Right now, with Intel on the accelerator sidelines and AMD still working to bring out their entries, the field is clear for Nvidia, and it’s making the most of it.
