Nvidia flexes Tesla muscles

Opens kimono, talks strategy

2010 was the breakout year for Nvidia’s Tesla division, according to Tesla VP Andy Keane, who spoke at the company’s Industry Analyst Day earlier this month. I think it’s pretty obvious that he’s right, and a quick review of the last year tells the story.

Three of the top five systems on the Top500 list sport Nvidia GPU accelerators. At SC10, Tesla GPUs were everywhere. They showed up in almost every hardware vendor booth, and most of the ISVs were either boasting about a CUDA-enabled piece of their application or discussing their future plans for it.

In his talk, Keane pulled back the covers further than I’ve seen before in any semi-public forum. He shared strategy and tactics, and even broke out some pretty impressive numbers for us. In Nvidia’s fiscal 2009 (which is mostly our calendar year 2008), Tesla revenue was about $10m for the year. It more than doubled to $25m in FY10 and quadrupled in FY11 (just completed) to top $100 million.

That’s pretty good growth, particularly when you factor in the poor economy and the associated pullback in most tech spending. For 2012, Nvidia expects to see sales volume double again, to $200m.

How big, you say?

One problem for Nvidia is estimating just how big its market actually is. Right now, every GeForce and Quadro product it sells can run CUDA, and there are a lot of these cards sitting in workstations and PCs – at least 200 million these days.

With CUDA downloads topping 700,000 by the end of 2010, Nvidia figures it has somewhere around 100,000 developers working with the platform. Many of these developers are doing their work on GeForce or Quadro cards that aren’t captured in the Tesla revenue numbers cited above. So why the big ramp-up in revenue and market acceptance?

The obvious answer is that GPUs can run rings around traditional CPUs on highly parallel numerical processing workloads. But to me, the real answer is that Nvidia put in the time and effort necessary to build up the ecosystem surrounding Tesla. It correctly recognized that no one was going to develop CUDA-enabled apps if they had to roll their own tools. So putting together a development environment and tooling was job one, closely followed by convincing ISVs to Tesla-ize their wares.
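
For readers who haven’t seen what “CUDA-enabled” actually means at the source level, here’s a minimal sketch – an illustrative SAXPY kernel, not code from any of the shipping apps mentioned here, with every name made up for the example. It shows the data-parallel model Nvidia’s toolchain compiles: one lightweight GPU thread per array element instead of one CPU loop.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

/* Each GPU thread computes one array element: y[i] = a*x[i] + y[i].
   Thousands of these threads run concurrently, which is where the
   speedup over a sequential CPU loop comes from. */
__global__ void saxpy(float a, const float *x, float *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;              /* one million floats */
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    /* Copy inputs to the GPU, launch one thread per element, copy back. */
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(2.0f, dx, dy, n);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);       /* expect 4.0 */

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

The same source compiles with nvcc and runs unchanged on any CUDA-capable card – GeForce, Quadro or Tesla alike – which is why the developer base Nvidia counts is so much broader than the Tesla revenue line suggests.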

The Nvidia strategy was to pick the leading apps in each segment and prove that GPUs could radically improve their performance. Sometimes this involved working directly with the ISV; other times it came about by working with researchers who would then publish their findings, making sure to cite the role GPUs played in the process. Examples of these killer apps include Amber in molecular dynamics, Ansys for engineering simulation, Autodesk’s 3ds Max for animation and rendering, and the venerable Matlab for mathy stuff.

This way, OEMs

At the same time, Nvidia greatly broadened its OEM strategy. In the early days, the company sold its own Tesla workstations to seed the market with systems. Beginning in 2008 or so, it started selling through SuperMicro. By 2010, its OEM list included every tier 1 vendor (Dell, HP, IBM) along with all of the specialized players such as Cray, SGI, Bull, T-Platforms and Appro. This puts Nvidia into everyone’s sales catalogs and system configurators, which is a big step.

Tesla isn’t a bleeding-edge choice anymore – at least not in HPC. It’s still newish to many customers, but the technology is now a mainstream, fully supported alternative to traditional CPU-only system designs.

To me, the sky is the limit for GPUs. As enterprises increasingly implement predictive analytics, I foresee a need for speedy devices that can crank through huge numerical workloads at low cost. Many of these workloads are a very good fit for GPUs, and the ability to purchase GPU capacity in small, inexpensive increments will speed adoption in corporate data centers.

Right now, with Intel on the accelerator sidelines and AMD still working to bring out its entries, the field is clear for Nvidia, and it’s making the most of it.
