Nvidia gets biological with life sciences nerds

Better shampoos through GPUs

Nvidia has a substantial lead over rivals Intel, Advanced Micro Devices, and IBM when it comes to peddling graphics co-processors, and it wants to keep that lead and extend it if possible. That means doing the boring old stuff that server and operating system makers have to do, such as lining up application software vendors so their code can take full advantage of the Tesla family of GPU co-processors.

To that end, Nvidia has corralled a dozen popular life sciences application vendors and made sure their code has been ported to the CUDA programming environment and can leverage the substantial number-crunching power of Tesla co-processors.
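
None of the ported packages' source is on show, but the flavor of the work is easy to sketch: the expensive inner loop of a molecular dynamics code is the non-bonded force calculation, which maps naturally onto one GPU thread per atom. The toy CUDA kernel below is a minimal sketch of that offload - not code from Nvidia or any of the vendors involved - and the atom count, Lennard-Jones parameters, and brute-force loop are all illustrative.

```
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// One thread per atom: each thread sums the Lennard-Jones force exerted on
// its atom by every other atom. Brute-force O(N^2), for illustration only.
__global__ void lj_forces(const float4 *pos, float3 *force, int n,
                          float epsilon, float sigma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float3 f = make_float3(0.0f, 0.0f, 0.0f);

    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        float inv_r2 = 1.0f / (dx*dx + dy*dy + dz*dz);
        float s6 = sigma * sigma * inv_r2;
        s6 = s6 * s6 * s6;                              // (sigma/r)^6
        float scale = 24.0f * epsilon * inv_r2 * (2.0f * s6 * s6 - s6);
        f.x += scale * dx;
        f.y += scale * dy;
        f.z += scale * dz;
    }
    force[i] = f;
}

int main()
{
    const int n = 4096;                                 // toy atom count

    // Toy positions: a simple cubic lattice so the kernel has sane input.
    std::vector<float4> h_pos(n);
    for (int i = 0; i < n; ++i) {
        h_pos[i] = make_float4(1.2f * (i % 16), 1.2f * ((i / 16) % 16),
                               1.2f * (i / 256), 0.0f);
    }

    float4 *d_pos;
    float3 *d_force;
    cudaMalloc((void **)&d_pos,   n * sizeof(float4));
    cudaMalloc((void **)&d_force, n * sizeof(float3));
    cudaMemcpy(d_pos, h_pos.data(), n * sizeof(float4), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    lj_forces<<<blocks, threads>>>(d_pos, d_force, n, 1.0f, 1.0f);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_pos);
    cudaFree(d_force);
    return 0;
}
```

Real packages replace the brute-force loop with cutoffs and neighbor lists, but the pattern - ship atom positions to the GPU, let thousands of threads accumulate forces in parallel, pull the results back - is the shape of the work CUDA exposes to these vendors.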

According to Sumit Gupta, senior product manager of the Tesla line at Nvidia, more than 500,000 scientists worldwide use the computational methods employed in these applications. The applications simulate molecular compounds and rudimentary organisms (or subsets of them), and the researchers running them mostly rely on either beefed-up x64 workstations and lots of time, or lots of iron on a supercomputer of which they get only a small slice of time for their simulations.

With the Tesla Bio Workbench launched today, Nvidia and its software and hardware partners want life sciences researchers to get greedy and crave more powerful workstations. These machines - a combination of x64 processors and GPU co-processors - mean researchers won't have to share a supercomputer to run their simulations locally. Nvidia and its partners also want GPUs added to supercomputer clusters, either to simulate more complex molecules and organisms or to run longer simulations than the workstations can manage.

There is a direct relationship between the flops in a box and the complexity or duration of a simulation that box can run, and life sciences applications are no exception. Back in 1982, a top-of-the-line supercomputer with one gigaflops of performance could simulate the 3,000 atoms in the protein aprotinin, also known as bovine pancreatic trypsin inhibitor.

By 1997, a supercomputer with hundreds of gigaflops could simulate the 36,000 atoms in an estrogen receptor, and by 2003, a teraflops-class super could model the 327,000 atoms in the F1 portion of ATP synthase, which is cool because it is a molecular rotor powered by proton gradients inside the cell.

Huge progress has been made in recent years - the 2.7 million atoms in a ribosome being simulated in 2006, for example. The downer is that it took eight months on a massively parallel supercomputer with many hundreds of teraflops to simulate a mere 2 nanoseconds of the ribosome's behavior.

A petaflops supercomputer will be able to simulate the 50 million atoms in a chromatophore, a kind of pigment cell that fish, lizards, amphibians, and other animals often use for camouflage. An exaflops super, by contrast - that's 1,000 petaflops, a performance level we might hit in two, three, or four years depending on who you ask - should be able to simulate a whole bacterium, with billions of atoms.

But the simulated timespan needs to increase and the wall-clock time to run the simulation needs to decrease for these models to be useful. Gupta says researchers need to be able to simulate somewhere between 1 and 100 microseconds, sometimes milliseconds, to do useful modelling of molecular interactions that might, for instance, show how a drug interacts with a cell.
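
How far away that is becomes clearer with a little arithmetic of our own (the femtosecond figure is a standard molecular dynamics rule of thumb, not something Gupta cited): the integrator typically advances the system a femtosecond or two at a time, so each microsecond of simulated behaviour means hundreds of millions of timesteps, each a full force calculation. A back-of-envelope sketch, assuming a 2 femtosecond step:

```
#include <cstdio>

// Back-of-envelope only: step counts needed for the simulated timespans
// Gupta mentions, assuming a 2 femtosecond integration step.
int main()
{
    const double timestep_s  = 2e-15;                   // assumed 2 fs step
    const double targets_us[] = {1.0, 100.0, 1000.0};   // 1 us, 100 us, 1 ms

    for (double t_us : targets_us) {
        double steps = (t_us * 1e-6) / timestep_s;
        printf("%7.0f microseconds of simulated time -> %.1e timesteps\n",
               t_us, steps);
    }
    return 0;
}
```

Even the low end of that range is hundreds of times more simulated time than the eight-month ribosome run managed.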

Such speedups are going to require GPU co-processors, says Nvidia, and lots of them. Plenty of HPC researchers are not sure that even this will be enough, given the daunting power and cooling issues supercomputer designers face as they push up to the exaflops performance level.

The Tesla Bio Workbench is not just about complex molecular and bacterial simulations at the largest supercomputer centers, but about practical things like discovering new drugs or designing a better shampoo or detergent more quickly than can be done today.
