Nvidia analysis ‘shockingly ignorant’

Shader business

Letter Re: Why Nvidia cg won't work

I must call foul on the rather hasty and reactionary article you posted in response to the Nvidia Cg shader launch. Andrew Richards predicts that Cg will fail because a C++ compiler for a PS2 chip is better. This is a very strange claim, considering that the PS2 processor at best supports geometry transformation, with access to host memory, on a platform that is essentially irrelevant to most graphics systems.

Would you predict the failure of Direct3D because it isn't on the PS2? Or of OpenGL because it merely has poor third-party support on the PS2? Fundamentally, the PS2 is a strange beast with quite primitive shading capabilities.

It's not so difficult to understand Richards' response once it becomes clear that he draws no distinction between pixel ('fragment') shading and vertex shading, nor does he seem to understand the requirements of shading efficiently on a highly scalable stream processor, as opposed to writing War and Peace on a more general-purpose processor.
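
For readers unfamiliar with the split: a vertex program runs once per vertex, while a fragment (pixel) program runs once per rasterised fragment. A minimal Cg-style sketch of each might look like the following - the program names, the pass-through logic and the 'decal' sampler are purely illustrative, not taken from any shipping shader:

    // Illustrative Cg vertex program: transform a vertex and pass its colour on.
    struct v2f {
        float4 position : POSITION;
        float4 color    : COLOR0;
    };

    v2f vertex_main(float4 position : POSITION,
                    float4 color    : COLOR0,
                    uniform float4x4 modelViewProj)
    {
        v2f OUT;
        OUT.position = mul(modelViewProj, position); // clip-space transform
        OUT.color    = color;                        // handed on to the rasteriser
        return OUT;
    }

    // Illustrative Cg fragment program: shade each pixel from a texture lookup.
    float4 fragment_main(float2 texCoord : TEXCOORD0,
                         uniform sampler2D decal) : COLOR
    {
        return tex2D(decal, texCoord);
    }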

The author claimed that divergent hardware will somehow defeat this shader approach. On the contrary, divergent hardware is one of the main reasons shader compilers exist: the variety of hardware (and interfaces) makes it increasingly difficult to design and hand-code sophisticated shaders that look visually consistent across different platforms and graphics cards.

The kind of approach being advocated by the author would clearly eliminate almost all graphics hardware as potential candidates, with his ridiculous assertion that pointer support is essential. (Some languages manage just fine without pointers.)

This is just shockingly ignorant of hardware implementations and of the nature of the problem, which is one of memory access and, again, of host-driven data streamed from memory.

(The graphics implementation is fundamentally a scalable, streaming one. There is no general-purpose capability to run arbitrary code of unlimited complexity and make any old fetch from system memory. A distinction must be drawn between the needs of generating and supplying application data, and the rendering of that data as it is streamed to a graphics processor.)

In addition, research papers have demonstrated how dependent texture fetches can be used in fragment (pixel) shaders to address memory held in texture data, if one insists on this type of access pattern.
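
To illustrate what such a dependent fetch looks like, here is a rough Cg-style sketch - the sampler names are my own invention, not code from any of those papers. The result of the first texture lookup supplies the coordinates for the second, which is as close as this streaming model comes to indirect memory access:

    // Illustrative dependent texture fetch: the first lookup yields an 'address'
    // stored in texture data, which then drives a second lookup.
    float4 dependent_fetch(float2 texCoord : TEXCOORD0,
                           uniform sampler2D addressMap, // texels hold lookup coordinates
                           uniform sampler2D dataTable)  // the 'memory' being addressed
        : COLOR
    {
        float2 address = tex2D(addressMap, texCoord).xy; // fetch the address
        return tex2D(dataTable, address);                // fetch the data it points to
    }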

It is important to understand that on CURRENT hardware - as in all PC graphics cards and even next-generation PC cards - a Cg shader can use the pixel operations as a low-level instruction set. The key here is that the instruction set is described by a separate template (profile) supplied with the Nvidia Cg software, which defines how Cg compiles the shader. Schemes like pointers, and languages of arbitrary complexity, would be woefully unsupportable.
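
To make the profile idea concrete: as I understand the Cg toolkit, the same source file is handed to the command-line compiler together with a target profile naming the low-level instruction set to emit, along these lines - shader.cg and vertex_main are placeholder names, and the exact flags and profile names depend on the toolkit release:

    cgc -profile vs_1_1 -entry vertex_main shader.cg   # target the DirectX 8 vertex shader instruction set
    cgc -profile arbvp1 -entry vertex_main shader.cg   # the same source, targeting an OpenGL vertex program profile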

On the issue of floats vs ints: I can use floats instead of ints in many cases, and there are limited examples of this in OpenGL currently. Many things with internal integer representations are exposed as floats, which in fact abstracts the hardware to a degree. Integer pixel values, for instance, aren't explicitly exposed - if you even have integer representations. I might lose my maximum int value, but we're not writing War and Peace in a vertex program. The 'integer' keyword has a placeholder in the spec; it's simply unsupported at present, probably because it just isn't required yet, although maybe some day it will be. By the same token, loops and branches are enabled by the target hardware profile, indicating a clear intent to support these features when they become practical on interesting hardware.
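
A trivial Cg-style illustration of the point (again my own sketch, not anything from the spec): colours and most other quantities are simply floats in a normalised range, and nothing in the arithmetic ever asks for an integer type:

    // Illustrative fragment program: colours are floats in [0,1], not 0-255 integers.
    float4 darken(float4 color : COLOR0) : COLOR
    {
        return color * 0.5; // halve the brightness; no integer representation is exposed
    }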

The future of Cg is not assured: there are alternatives emerging on relevant platforms, and I don't number the PS2 among the relevant platforms in this debate. But Cg will most certainly not fail because of an obscure C++ compiler for a PS2 chip.

Finally, the assertion was that Cg will fail. Fail at what? World domination? Very probably! Fail to gain significant developer mindshare? Possibly, but that race is wide open, and although Nvidia is the 800lb gorilla of graphics, they also have the most interesting and innovative hardware currently on the way.

So it must have a reasonable shot at this one. Half the battle is having the courage of your convictions, and failing to deliver interesting capabilities on Nvidia's graphics platforms would be worse than doing nothing. With Microsoft's endorsement, and serious tool vendors like Alias|Wavefront and Discreet getting involved, Cg can only help Nvidia.

There are many markets Nvidia might hope to chase with this technology - not just games, but also developers for whom Cg's functionality has long been the holy grail. There is an entire industry out there that has managed for years with shader functionality that can easily be implemented within the Cg framework. Now that Nvidia is on the verge of delivering this capability in hardware, I doubt that those who have been begging for it will hesitate to use it for the reasons suggested. And in terms of hardware quality, Nvidia is shaping up to have the best of breed. We shall see.

Sincerely,
Corky Closure
