
GPGPUs and FPGAs are now fully implanted in our brains

The server booster bonanza takes hold


CUDA (Compute Unified Device Architecture) is Nvidia's current answer to the GPGPU software problem. In total, the CUDA toolkit gives developers a C compiler for GPUs, a runtime driver, and FFT and BLAS libraries. With the fresh release of CUDA 1.1, Nvidia has added support for 64-bit Windows XP (broad Linux and 32-bit XP support was already there), while also declaring that it will bundle the CUDA driver with its standard display drivers.
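To give a flavour of what the toolkit hands you, here is a minimal sketch of a call into the bundled FFT library. The transform size and the in-place setup are placeholder choices of our own, not anything lifted from Nvidia's documentation.

#include <cufft.h>
#include <cuda_runtime.h>

int main(void)
{
    const int N = 1024;                          // placeholder transform size
    cufftComplex *d_signal;

    // Allocate device memory for N complex samples
    cudaMalloc((void **)&d_signal, sizeof(cufftComplex) * N);
    // ... copy input data into d_signal with cudaMemcpy ...

    // Plan and run a single 1D complex-to-complex FFT, in place
    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);

    cufftDestroy(plan);
    cudaFree(d_signal);
    return 0;
}

The BLAS side of the toolkit, CUBLAS, follows the same pattern of device allocations plus library calls.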

Using the CUDA software, developers can tap into GPUs for speed-ups on a wide variety of applications, including those most near and dear to the high performance computing crowd's heart - stuff like Matlab or Monte-Carlo option pricing.
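To make that last one concrete, here is a toy sketch of Monte-Carlo option pricing as a CUDA kernel. The pricing parameters and the crude per-thread random number generator are our own illustration; a production pricer would use a proper RNG and far more paths.

#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Each thread prices a batch of simulated paths for a European call option,
// using a simple per-thread linear congruential generator and a Box-Muller
// transform to draw normals. All market parameters here are placeholders.
__global__ void mc_call(float *partial, unsigned int seed, int paths_per_thread,
                        float S0, float K, float r, float sigma, float T)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int state = seed ^ (tid * 2654435761u);
    float sum = 0.0f;

    for (int i = 0; i < paths_per_thread; ++i) {
        // Two uniform draws -> one standard normal via Box-Muller
        state = state * 1664525u + 1013904223u;
        float u1 = (state & 0x00FFFFFF) / 16777216.0f + 1e-7f;
        state = state * 1664525u + 1013904223u;
        float u2 = (state & 0x00FFFFFF) / 16777216.0f;
        float z  = sqrtf(-2.0f * logf(u1)) * cosf(6.2831853f * u2);

        // Terminal price under geometric Brownian motion, discounted payoff
        float ST = S0 * expf((r - 0.5f * sigma * sigma) * T + sigma * sqrtf(T) * z);
        sum += expf(-r * T) * fmaxf(ST - K, 0.0f);
    }
    partial[tid] = sum / paths_per_thread;
}

int main(void)
{
    const int threads = 256, blocks = 64, n = threads * blocks;
    float *h_partial = new float[n];
    float *d_partial;

    cudaMalloc((void **)&d_partial, n * sizeof(float));
    // 1,000 paths per thread; at-the-money call, 5% rate, 20% vol, one year
    mc_call<<<blocks, threads>>>(d_partial, 1234u, 1000, 100.0f, 100.0f, 0.05f, 0.2f, 1.0f);
    cudaMemcpy(h_partial, d_partial, n * sizeof(float), cudaMemcpyDeviceToHost);

    double price = 0.0;
    for (int i = 0; i < n; ++i) price += h_partial[i];
    printf("Estimated call price: %f\n", price / n);

    cudaFree(d_partial);
    delete[] h_partial;
    return 0;
}

For those inputs the Black-Scholes value is roughly 10.45, so the estimate should land in that neighbourhood. The point is the shape of the thing: thousands of independent paths, one batch per thread, which is exactly the sort of workload the GPU's thread machinery was built for.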

As Nvidia explains it,

Where previous generation GPUs were based on “streaming shader programs”, CUDA programmers use ‘C’ to create programs called kernels that use many threads to operate on large quantities of data in parallel. In contrast to multi-core CPUs, where only a few threads execute at the same time, NVIDIA GPUs featuring CUDA technology process thousands of threads simultaneously enabling high computational throughput across large amounts of data.

GPGPU, or "General-Purpose Computation on GPUs", has traditionally required the use of a graphics API such as OpenGL, which presents the wrong abstraction for general-purpose parallel computation. Therefore, traditional GPGPU applications are difficult to write, debug, and optimize. NVIDIA GPU Computing with CUDA enables direct implementation of parallel computations in the C language using an API designed for general-purpose computation.
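To make the kernel idea concrete, here is a minimal sketch of the sort of C-with-extensions code Nvidia is describing. The SAXPY operation, array size and launch configuration are our own illustrative choices, not Nvidia's example.

#include <cstdio>
#include <cuda_runtime.h>

// The kernel: every thread handles one array element, so a large array is
// processed by thousands of threads running concurrently on the GPU.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                       // one million elements (placeholder)
    size_t bytes = n * sizeof(float);
    float *h_x = new float[n], *h_y = new float[n];
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, d_x, d_y);

    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);               // expect 3*1 + 2 = 5

    cudaFree(d_x); cudaFree(d_y);
    delete[] h_x; delete[] h_y;
    return 0;
}

Each element gets its own thread; the hardware takes care of scheduling them across the GPU's processors.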

CUDA received high praise at Supercomputing from Nvidia rivals and partners. Rival compliments usually serve as one of the surest signs that a given software package actually works as billed. Nvidia's Andy Keane, general manager of the GPU computing business, pointed us to several developers that ported their applications to an Nvidia GPU in a few hours. In many cases, these developers saw between 60 per cent and 150 per cent speed-ups with certain operations.

"We are not asking customers to take an entire application and run it on a GPU," Keane said. "We're looking for them to put suitable functions on a GPU. You're designing software to run on a GPU and designing it appropriately."

Some FPGA rivals to Nvidia knock GPUs for introducing performance and heat issues. DRC, which we'll get to later, claims that GPGPU performance will often top out at about a 10 per cent speed up on some applications because GPUs fail to handle loops well. "Any application with a dependency where it branches or loops back will have to exit the GPU and start over, which is where you get a huge performance penalty," DRC VP Clay Marr told us. In addition, GPUs tend to consume as much or a bit more power than standard CPUs, while FPGAs consume about 20 watts. Fill a cluster with GPU add-on cards, and you're talking about a hell of a lot of heat.
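For the curious, this is roughly the troublesome shape DRC is pointing at: a hypothetical kernel whose loop runs a different number of times for each thread, so fast threads sit around waiting for slow neighbours. The example is ours, not DRC's, and how badly it hurts depends on the hardware and the data.

#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Hypothetical data-dependent loop: each thread runs a Newton iteration for
// sqrt(a) whose trip count depends on its input value, so threads within the
// same group diverge. This is the class of control flow the FPGA camp argues
// GPUs handle poorly.
__global__ void iterate_sqrt(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float a = in[i], x = a;
    int steps = 0;
    while (fabsf(x * x - a) > 1e-4f * a && steps < 1000) {   // varies per thread
        x = 0.5f * (x + a / x);
        ++steps;
    }
    out[i] = x;
}

int main(void)
{
    const int n = 1024;
    float h_in[1024], h_out[1024];
    for (int i = 0; i < n; ++i) h_in[i] = (float)(i + 1);

    float *d_in, *d_out;
    cudaMalloc((void **)&d_in, n * sizeof(float));
    cudaMalloc((void **)&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    iterate_sqrt<<<(n + 255) / 256, 256>>>(d_in, d_out, n);

    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("sqrt(%.0f) ~= %f\n", h_in[n - 1], h_out[n - 1]);

    cudaFree(d_in); cudaFree(d_out);
    return 0;
}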

According to Keane, these performance claims are just plain untrue. Beyond that, it's FPGAs and not GPUs that are the real coding pain.

"With FPGAs, you are designing a chip," Keane said. "At the end of the design cycle, if something doesn't work, you have to go back to the start and redesign."

GPUs offer more flexibility from a coding standpoint and can keep up with customer changes, Keane said. A financial institution, for example, may make repeated tweaks to an algorithm and need to update its accelerator software for these alterations. This process happens at a much quicker pace with GPUs.

On the energy front, Nvidia thinks it tells a good enough story by matching the power draw of x86 chips. You're seeing a major performance boost for certain operations while staying within the same power envelope. In addition, Nvidia can offer lower-power GPUs if need be.

Nvidia's GPGPU story should improve next year when it matches ATI/AMD by rolling out double-precision hardware.

In the meantime, customers can check out Nvidia's various Tesla boards and systems. Those in search of the highest-end performance will want the four-GPU Tesla S870 server, while developers might lean toward the two-GPU D870 deskside supercomputer.

Just in case you're getting bored by our accelerator adventure, we'd like to offer you Verari's take on a booth babe as an intermission.

[Photo: a lady in a "Nice Rack" t-shirt]

Always subtle - Verari presents the nice rack girls

Accelepizza

Companies like Nvidia need partners like Acceleware.

Based in Calgary, Acceleware slaps its unique breed of software on Nvidia's Tesla systems. At the moment, it specializes in speeding up software used for electromagnetic simulations. We're talking about code used to improve products ranging from cell phones and antennas to microwaves.
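For a flavour of the kind of computation involved (this is our own toy sketch, not Acceleware's code), solvers of this sort spend most of their time sweeping stencil updates over a grid of field values, along these lines:

#include <cuda_runtime.h>

// Hypothetical 1D FDTD-style field update: each thread advances one grid
// cell of the electric field using its neighbours' magnetic field values.
// Real electromagnetic solvers do this in 3D over millions of cells per time
// step, which is why they map well onto thousands of GPU threads.
__global__ void update_e(float *ez, const float *hy, int n, float coeff)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n)
        ez[i] += coeff * (hy[i] - hy[i - 1]);
}

int main(void)
{
    const int n = 4096;                       // placeholder grid size
    float *d_ez, *d_hy;
    cudaMalloc((void **)&d_ez, n * sizeof(float));
    cudaMalloc((void **)&d_hy, n * sizeof(float));
    cudaMemset(d_ez, 0, n * sizeof(float));
    cudaMemset(d_hy, 0, n * sizeof(float));

    // One time step of the toy update; a real solver loops over many steps
    // and updates the magnetic field as well.
    update_e<<<(n + 255) / 256, 256>>>(d_ez, d_hy, n, 0.5f);
    cudaDeviceSynchronize();

    cudaFree(d_ez); cudaFree(d_hy);
    return 0;
}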

Proving the relative maturity of its software, Acceleware claims some very impressive customers - Nokia, Samsung, Philips, Hitachi, Boston Scientific and Motorola to name a few.

Acceleware tries to remove the complexity associated with developing software for a GPU by supplying its own libraries and APIs to partners and customers. The company then works hand-in-hand with clients to bring their existing code to GPUs - a process that takes "one developer about one month." The end result can be up to a 35x boost in performance.

Boston Scientific tapped Acceleware to figure out how pacemakers will interact with MRI machines. "Acceleware combines its proprietary solution with Schmid & Partner Engineering AG's (SPEAG) SEMCAD X simulation software and NVIDIA GPU computing technology, enabling engineers at Boston Scientific to supercharge their simulations by a factor of up to 25x compared to a CPU," we're told.

Bored by pacemakers? Well, General Mills turned to Acceleware for help figuring out how pizzas will behave in the microwave. Is there anything finer than modeling the interaction of processed cheese and radiation at high speed?

Looking ahead, Acceleware plans to add more software aids for oil and gas customers and to do more work in the medical imaging field.

Acceleware CTO Ryan Schneider insists that the work needed to port code to a GPU is not as daunting as it sounds.

"We're not saying, 'Here is a development kit. Everyone can do it.'" he told us. "We're going into a vertical and learning the algorithms and applications. It's a different approach from some of the other companies.

"That makes it sound like we're doing custom development work all the time, but there are core algorithms that everyone uses in these spaces. We take the work and get re-use."

Like ClearSpeed, Acceleware is part of HP's accelerator program. The company also has a relationship with Sun and plans to announce a partnership with Dell in the near future.

And now we head back to FPGA country.
