Pre-AMD, ATI preps novel server charge

GPGPU for U and me

"These are streaming chips with a whole bunch of floating point units," Houston said. "You have to restructure code sometimes to get the best use out of these things. It's not for the faint of heart.

"In the high performance market, we've been talking about symmetric multi-processor servers with maybe four or eight or 16 threads. On an ATI chip, you're talking about 48 threads of simultaneous execution."*

ATI has only recently allowed developers to tap into its CTM (close to the metal) interface, which lets software interact directly with the underlying hardware.

Presumably, ATI will announce an even more open stance at its event next week.

As stated, the company declined to give us specific details on what it will reveal at the event. In late August, however, some savvy types discovered mention of ATI's FireStream 2U product by examining the server output log from an ATI Linux driver. Then, earlier this month, another chap discovered a living, breathing 1GB ATI FireStream card.

"The card is indeed based on R580 with a board layout nearly identical to the FireGL 7350 including the 1GB of ram," he wrote. "The box I saw contained only this card and a driver cd which had been burnt and was labeled as being a beta. Also, I found the label on the CD interesting: 'FireSTREAM Enterprise Stream Processor.'"

An ATI spokesman confirmed the existence of the FireStream product, but said its name may change due to the pending AMD merger and associated branding funk.

Nvidia did not immediately return our calls seeking comment for this story.

In an ideal world, the AMD/ATI tie-up will make life even easier on the GPGPU crowd.

"If I had my dream setup, it would include a much tighter interconnection between the graphics chip and central processor," Houston said. "What would be really interesting would be to have a cache coherent interface between the graphics processor and main processor."

AMD - via its HyperTransport technology - could potentially deliver just such an interface by letting GPUs plug directly into Opteron-based motherboards. This would let mainstream server makers such as Sun, IBM and HP follow the lead of a GraphStream and build graphics supercomputers.

Houston, and others, are also hoping that future GPGPU gear will support double-precision floating-point operations, opening up the processor technology to a wider array of applications.

At the moment, ATI seems to be in an experimental phase with the GPGPU idea. Similarly, Nvidia hasn't rushed at the chance to talk up what it plans to offer.

The success of the technology will depend on the progression of software written for the GPUs and the sophistication of the GPGPU tools. In addition, the GPUs will need to stack up well against other options such as the Cell chip and FPGAs.

You can, however, imagine that with the raw power of GPUs and their volume status, customers should expect to see $2,000-ish boards make their way into workstations soon, followed by cheaper boards slotting into servers.

Without question, enterprise customers and labs are pleased to see GPGPUs moving out of the concept and testing phase and toward productville. A merged AMD/ATI might be in the best possible position to capitalize on these customers' interest. Hopefully, we'll know a lot more about ATI's ambitions next week. ®

*Bootnote

Houston was kind enough to add some technical detail on the differences between stream processing and multi-threaded processing, for the curious.

In stream processing, you run the same program on lots of elements simultaneously. Stream processing is a subtype of data-parallel processing. The main goal is to stage data so that it can be moved (streamed) through the memory system at high efficiency. All processing elements run the exact same program, but on different data (parts of the stream). You cover memory latency with a large amount of computation on each element ("arithmetic intensity"). In a stream model, all execution contexts (processors) run independently, so there is no locking or communication. Stream programming works well when there is a large amount of parallelism, but it is limited in which applications it can run well, and you often have to convert an algorithm into a streaming formulation.
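
To make that concrete, here is a minimal C++ sketch of the stream idea: one kernel applied uniformly to every element, plenty of arithmetic per element, and no locking or communication between elements. The kernel, its body, and all names are illustrative assumptions - this is not ATI's CTM interface or any real GPU API.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// One "kernel" applied uniformly to each element of the stream.
// Heavy per-element arithmetic ("arithmetic intensity") hides memory latency.
float kernel(float x) {
    float acc = x;
    for (int i = 0; i < 64; ++i)
        acc = std::sqrt(acc * acc + 1.0f);
    return acc;
}

// The whole stream flows through the same program; elements are independent,
// so there is no locking or communication between them.
int main() {
    std::vector<float> in(1 << 20, 2.0f), out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = kernel(in[i]);            // same code, different data
    std::cout << out[0] << '\n';
}

On a GPU, the loop over elements is what the hardware parallelises across its many floating-point units; the programmer's job is to express the work in this element-at-a-time form.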

For multi-threading, each core can, and often does, run a different program. For example, one thread might be doing audio while another does the AI for the bots in a game. Memory performance is generally gained by tuning your apps to make good use of the processor caches. General thread programming styles work well for a small number of threads - tens of threads, say. The user explicitly manages the processing, but there is a large burden on the programmer to handle locking and communication control. Data-parallel and streaming models can also be used well on multi-core processors, generally by carefully moving data through the cache hierarchy.
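
By way of contrast, here is a minimal C++ sketch of the multi-threaded style: two threads running different programs and coordinating through an explicit lock. The task names and shared state are purely illustrative assumptions.

#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

std::mutex log_mutex;                    // locking is managed explicitly by the programmer
std::vector<std::string> shared_log;     // shared state that both threads touch

void audio_task() {                      // one thread handles "audio"
    std::lock_guard<std::mutex> lock(log_mutex);
    shared_log.push_back("audio frame mixed");
}

void ai_task() {                         // another runs the "AI" for the bots
    std::lock_guard<std::mutex> lock(log_mutex);
    shared_log.push_back("bot path updated");
}

int main() {
    std::thread t1(audio_task), t2(ai_task);   // a handful of threads, not thousands
    t1.join();
    t2.join();
    for (const auto& line : shared_log)
        std::cout << line << '\n';
}

The burden Houston describes is visible even at this scale: every access to shared state must be guarded, and the scheme has to be kept correct by hand as the number of threads grows.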
