
Cool Fusion: AMD's plan to revolutionise multi-core computing

Different cores for different chores

AMD naturally approached potential partners with graphics chip development expertise - long before ATI and AMD first began discussing the possibility of a merger, in late December 2005/early January 2006, according to Hester - and while the ultimate choice was ATI, we'd be very surprised if AMD didn't talk to a number of ATI's competitors. In the glow of the post-merger honeymoon period, erstwhile ATI staffers paint a rosy picture of the two firms' highly aligned goals.

Certainly, ATI had developed an interest in processor technology, having seen how researchers were increasingly turning to programmable GPUs to process non-graphical data. As Bob Drebin, formerly of ATI's desktop PC products group and now AMD's graphics products chief technology officer, puts it, science and engineering researchers and users had become attracted to the modern GPU's hugely parallel architecture and gigaflop-class performance.

AMD Fusion: CPU vs GPU performance acceleration

A GPU typically takes a heap of pixel data and runs a series of shader programs on each pixel to arrive at a final set of colour values. As display sizes have grown - a demand magnified by anti-aliasing requirements - GPUs have evolved to process more and more pixels this way simultaneously.
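To picture that pattern in code, here's a minimal sketch of the per-pixel model written as a CUDA kernel - our illustration, not anything from AMD or ATI: every thread runs the same small program on one pixel and writes back a colour value.

#include <cuda_runtime.h>

// One thread per pixel: each runs the same tiny "shader" on its pixel
// and writes the resulting colour back. The brightness boost here
// simply stands in for an arbitrary shader program.
__global__ void shade(const uchar4 *in, uchar4 *out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    uchar4 p = in[i];
    out[i] = make_uchar4((unsigned char)min(p.x + 40, 255),
                         (unsigned char)min(p.y + 40, 255),
                         (unsigned char)min(p.z + 40, 255),
                         p.w);
}

Launched over a grid covering the whole frame, thousands of these threads run in parallel - which is exactly the property that makes the hardware interesting beyond graphics.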

But since the result of any of these pixel shader runs is just a set of binary digits, and digital data can be interpreted in whatever way the programmer believes is meaningful, modern GPUs are not limited to pixels. Try fluid dynamics data instead, replacing pixels with particle velocities and using shader code to calculate the effect on a given particle of all the nearby particles. The result this time is a number you interpret as a new velocity value rather than a colour.
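The same execution pattern carries over almost unchanged. A rough sketch, again our own illustration rather than any particular solver - the inverse-square interaction and the softening constant are placeholders:

// One thread per particle: sum the pull of every other particle and
// write out a new velocity. The output is interpreted as physics rather
// than colour, but the structure mirrors the pixel shader above.
__global__ void step_velocities(const float4 *pos, float4 *vel,
                                int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 acc = make_float3(0.f, 0.f, 0.f);
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pos[j].x - pos[i].x;
        float dy = pos[j].y - pos[i].y;
        float dz = pos[j].z - pos[i].z;
        float d2 = dx*dx + dy*dy + dz*dz + 1e-6f;  // softening avoids divide-by-zero
        float inv = rsqrtf(d2);
        float s = pos[j].w * inv * inv * inv;      // w component holds mass
        acc.x += dx * s;
        acc.y += dy * s;
        acc.z += dz * s;
    }
    vel[i].x += acc.x * dt;
    vel[i].y += acc.y * dt;
    vel[i].z += acc.z * dt;
}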

That's real physics, but as both ATI and Nvidia have been touting during the past six months or so, it equally applies to the movement of objects and substances in a game.

Crucially, though, this doesn't mean the CPU isn't needed any more, Drebin says. Going forward, it's a matter of matching a given task to the processing resource - GPU or CPU - that will be able to crunch the numbers most quickly. But if you're looking at apps with a much higher demand for GPUs than CPUs, ATI's thinking went, maybe we should be designing products that also provide the more general purpose processing that CPUs do so well.
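In practice, that matching might start as nothing more exotic than a host-side heuristic like the sketch below. Everything here is hypothetical - the cutoff value and the two helper functions are placeholders we've invented for illustration, not anything AMD has specified:

// Hypothetical dispatch: send large, data-parallel batches to the GPU,
// keep small or branch-heavy work on the CPU.
void process_on_gpu(const float *data, float *result, int n);  // kernel launch wrapper
void process_on_cpu(const float *data, float *result, int n);  // plain serial loop

const int GPU_CUTOFF = 1 << 16;  // placeholder; would be tuned per device and workload

void process(const float *data, float *result, int n)
{
    if (n >= GPU_CUTOFF)
        process_on_gpu(data, result, n);
    else
        process_on_cpu(data, result, n);
}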

It's not hard to imagine, incidentally, Nvidia thinking along similar lines, particularly given the rumours that it's exploring x86 development of its own. It's hard to believe it has some scheme to break into the mainstream CPU market - as AMD's Hester says, there are really only two companies that make x86 processors, but "four or five that have tried and failed" - but much easier to picture general-purpose processing units tied to powerful GPUs and aimed at apps that need that balance of functionality.

The beauty of AMD's modular approach is that it allows the company to produce not only more CPU-oriented processors but also the GPU-centric devices ATI had been thinking of. And all of them are aligned architecturally and guaranteed a baseline of compatibility thanks to their adherence to the x86 instruction set.

AMD Fusion: the concept

Which will itself have to adapt, Hester says, taking on new extensions that allow coders to access the GPU's features directly - or, indeed, those of other modules brought on board as and when it makes economic sense to do so. The big users will be the OpenGL and DirectX API development teams, but other software developers are going to want to access these future x86 extensions for other, non-graphical applications.
