
Quick Guide to GPU Computing

Parallel thinking

A big topic in HPC over the last couple of years has been hybrid computing, in which general-purpose processors (such as Intel/AMD x86 CPUs or IBM/Sun/HP RISC CPUs) are combined with specialized processing units optimized for numerical work.

The vast majority of real work in this area revolves around using many-core GPUs (like those found on humble and not-so-humble PC video cards) to take number-crunching chores off the plate of the standard system CPU. While a CPU today has up to six cores, a GPU can have hundreds.

The results can be astounding: GPUs can thrash through some calculations 100x faster than even the fastest general-purpose processors. Moreover, the cost of GPUs is reasonable – even cheap – compared to the amount of work they can do. The catch is that GPUs need highly parallel code in order to deliver those speeds, so applications must be ported or rewritten to run on them.
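To make the "highly parallel" point concrete, here is a minimal sketch (not from the article) of what ported code tends to look like, using CUDA's SAXPY (`y = a*x + y`) as the classic example; the grid/block sizes and variable names are illustrative. Instead of one CPU loop, each of thousands of GPU threads handles a single array element:

```cuda
// Illustrative sketch only: SAXPY recast as data-parallel CUDA code.
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    // Each thread computes its own global index and handles one element.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;                      // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();                    // wait for the GPU to finish

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The rewrite is the hard part: the serial loop disappears, and the programmer must instead think in terms of thousands of independent per-element threads, which is exactly why existing apps can't just be recompiled for a GPU.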

The friendly folks at the National Center for Supercomputing Applications have written a brief but highly informative guide to GPU computing. If you need the basics, this is a good place to start...

Biting the hand that feeds IT © 1998–2017