Academics float NVidia pixel plans

"CPU through you we can"

A collection of small furry animals catches our attention, pointing their paws at this very interesting research paper. Have a look, says one.

It's a project rather unpromisingly entitled "Ray Tracing on Programmable Graphics Hardware" and it's emerged from Stanford's Graphics Lab, with more than a little help from NVidia.

The paper explains "how viewing a programmable graphics processor as a general parallel computation device can help us leverage the graphics processor performance curve and apply it to more general parallel computations, specifically ray tracing.

Obviously hoping no one is awake, they elaborate:-

"We have shown that ray casting can be done efficiently in graphics hardware. We hope to encourage graphics hardware to evolve toward a more general programmable stream architecture."

By now only a few parallel-processing freaks will be awake, they hope. But that's enough to ensnare us. It's getting interesting.

Parallelism is increasing faster on the graphics card than on the CPU, they declare, before issuing a rallying cry for freedom: give graphics cards a full instruction set!

Bending our ear to the ground, our furry friend whispers:-

"What is downright strange about the paper is that they divide their work into two completely separate pieces of work on the same concept, one using hardware that has never never been built yet, and another using even better hardware that's, well, never been built either. It's one thing to speculate on future trends in the industry but who are they kidding?

"The paper lays out an imaginary graphics roadmap for the next one or two years and casually states that the authors see these things as a natural extrapolation from current hardware capabilities.

"What they really have in this paper is the inside skinny on what NVidia is currently working on, and they map it out in some detail.

Indeed, the final credits do thank "Matt Papakipos from NVIDIA [sic] for his thoughts on next generation graphics hardware."

Although ATI, Sun, SGI and Sony are also thanked as sponsors, it's the tattooed encyclo-piddia [Reg. rhyming slang - ed.] that seems to be the inspiration for this.

The paper points to some specifics:-

- Programmable floating point pixels, and floating point texture

- Programmability for pixels that's equivalent to current vertex programmability

- Lots of texture lookups and dependent texture reads, with few limits

- Storing more than one color to the framebuffer, and not any piddly little colors either: each one is four floats
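Translated, with hindsight and a pinch of salt, into CUDA terms (our names and assumptions throughout, not the paper's code), that wish list comes out looking something like this: full-float data per pixel, a dependent read in which one lookup's result picks the address of the next, and two float4 "framebuffers" written per pixel:-

// A hedged CUDA-flavoured sketch of the wish list above, names ours:
// full-float per-pixel data, a dependent read, two float4 outputs.
#include <cuda_runtime.h>

__global__ void pixelProgram(const float*  indexTex,     // first lookup
                             const float4* dataTex,      // dependent lookup
                             float4* out0, float4* out1, // two "framebuffers"
                             int w, int h, int texSize)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    int p = y * w + x;

    // Dependent texture read: the first texture's value is itself an
    // address into the second (assumed non-negative here for brevity).
    int idx = ((int)indexTex[p]) % texSize;
    float4 v = dataTex[idx];

    // Two full-precision colors stored per pixel, not piddly 8-bit ones.
    out0[p] = v;
    out1[p] = make_float4(v.x * 0.5f, v.y * 0.5f, v.z * 0.5f, v.w);
}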

The paper then goes on to lament the lack of branching and looping in the instruction set on this 'imaginary' hardware and makes a convincing case for this capability. People want it, they argue. And they point out that branching would fundamentally alter the parallelism of graphics architectures.
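To see what they're lamenting, consider the workaround of the day: with no flow control in the pixel pipeline, an "if" means evaluating both arms regardless and blending them by a 0-or-1 mask. A hedged sketch, ours rather than theirs:-

// Branch-free selection, the era's stand-in for a real "if".
#include <cuda_runtime.h>

__device__ float4 selectNoBranch(float cond, float4 a, float4 b)
{
    // cond is 0.0f or 1.0f; a and b were BOTH computed, hit or miss.
    return make_float4(cond * a.x + (1.0f - cond) * b.x,
                       cond * a.y + (1.0f - cond) * b.y,
                       cond * a.z + (1.0f - cond) * b.z,
                       cond * a.w + (1.0f - cond) * b.w);
}

Half the arithmetic is thrown away every time, and the longer the arms grow, the worse it stings. Hence the lobbying.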

So instead of big chips driving modest graphics cards, could we have fairly modest chips tending vast arrays of graphics processing units?

"You're talking shite again, Andrew," mutters our furry friend.

"This is more like say, SSE instruction support through the graphics card in terms of its implications for impacting CPU development. It could be big for some applications but not for general computing, and it will never be as pervasive as x86 extensions unless NVIDIA and ATI cooperate."

"But all kinds of things are applications are possible with this architecture that weren't before," he adds, before scampering off in a flurry of brown fur.

At the Merrill Lynch Hardware Heaven financial analyst conference, NVidia's CEO described the company's next generation chip, scheduled to ship in late summer, as "the most important contribution we've made to the graphics industry since the founding of this company."

We're beginning to see why. ®

Related Stories

Intel sleepwalks through NVidia press release
3DLabs claims breakthrough graphics chip
