Nvidia to acquire ray tracing startup
Nvidia has in the past jeered at Intel's heavy investment in ray tracing as a successor to rasterization for graphics rendering, but it has always stopped short of dismissing the technology completely.
That logically led many to assume Nvidia was developing its own ray tracing technology on the side. As it turns out, those bets were pretty well placed.
Nvidia will soon announce its acquisition of a ray tracing startup called RayScale. The firm is a spin-off from the University of Utah and will help Nvidia marry ray tracing with traditional rasterization techniques.
"I don't believe in ray tracing versus rasterization," said Nvidia's CTO David Kirk during a reporters' preview of the acquisition yesterday. "I believe in ray tracing with rasterization."
According to Kirk, while ray tracing today is neither appropriate nor cost-effective for all graphics rendering, it beats rasterization for certain effects such as accurate reflections and indirect lighting. And it's gangbusters at making cars look shiny.
In fact, RayScale's proof-of-concept graphic at the pre-announcement was an intensely shiny CG car in front of an Nvidia jet.
However, Nvidia believes that because ray tracing is so resource-intensive, for the time being it's better utilized as a crutch for rasterization and other techniques rather than as a replacement.
"The people with the horsepower for ray tracing are making movies," said Kirk. "So you'd think if it was the best, they would be using ray tracing exclusively."
Kirk said movie studios today instead use a grab-bag of rendering techniques like rasterization, ray tracing, and radiosity.
It will still be some time before we start seeing games using ray tracing and rasterization, according to Kirk.
"We're not at the point where CPUs and GPUs can trace enough rays with ray tracing," said Kirk. "It will be the art of choosing carefully where to use the rays."
Further details on the acquisition, including the price tag, aren't available quite yet. Look for an official announcement from Nvidia in the coming days or weeks. ®
The Amusement Machine - Real Time Raytracing...test it now.
Interesting bit of news indeed. Please try http://theamusementmachine.net/ . We are showing real-time ray tracing on NVIDIA GPUs combined with rasterization (in some of the videos). There is also a beta demo you can download and try if you register. Quite a bit different from the classic spheres and reflections. Enjoy.
NVIDIA Gelato is a renderer with raytracing support...
Gelato is a production-quality, non-interactive "final-frame" renderer. (In fact, I wouldn't be surprised if Gelato uses mental ray from their mental images acquisition.)
RayScale on the other hand appears to be a different beast in that it aims for interactive (i.e., real-time) raytracing.
My guess about NVIDIA's acquisition of RayScale is that they are hoping to benefit from the RayScale technology/knowledge to improve future GPUs with real-time raytracing support.
Only tracing every fifth pixel is a good optimisation, but you can vastly improve the picture quality by following it with another step: if the four traced pixels at the corners of a square all hit the same object, interpolate the texture and intensity between them, but if they don't, follow all the "missing" rays for the intervening pixels. I did that with a raytracer I made in '86 (except I followed every fourth pixel instead of every fifth), and I found that, with relatively simple scenes, I got immense speed-ups and only a little degradation in picture quality. In an animated sequence, I doubt you would notice the difference.
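In C, that corner-sampling scheme might look something like the sketch below. trace_pixel, the toy one-disc scene, and the STEP size are all stand-ins for illustration, not the original '86 code.

#include <stdbool.h>

#define STEP 4   /* one primary ray per 4x4 block corner */

typedef struct { float r, g, b; } Color;
typedef struct { Color c; int object_id; } Hit;   /* object_id -1 = background */

/* Toy stand-in scene (a single screen-space disc) so this compiles
 * on its own; a real tracer would intersect actual geometry here. */
static Hit trace_pixel(int x, int y)
{
    int dx = x - 160, dy = y - 120;
    if (dx * dx + dy * dy < 60 * 60)
        return (Hit){ { 1.0f, 0.2f, 0.2f }, 1 };
    return (Hit){ { 0.1f, 0.1f, 0.3f }, -1 };
}

static Color bilerp(Color c00, Color c10, Color c01, Color c11, float u, float v)
{
    Color top = { c00.r + (c10.r - c00.r) * u, c00.g + (c10.g - c00.g) * u,
                  c00.b + (c10.b - c00.b) * u };
    Color bot = { c01.r + (c11.r - c01.r) * u, c01.g + (c11.g - c01.g) * u,
                  c01.b + (c11.b - c01.b) * u };
    return (Color){ top.r + (bot.r - top.r) * v, top.g + (bot.g - top.g) * v,
                    top.b + (bot.b - top.b) * v };
}

/* Right/bottom screen edges are left unhandled for brevity. */
void render_adaptive(int w, int h, Color *img)
{
    for (int by = 0; by + STEP < h; by += STEP) {
        for (int bx = 0; bx + STEP < w; bx += STEP) {
            /* Trace only the four corners of each block. */
            Hit h00 = trace_pixel(bx,        by);
            Hit h10 = trace_pixel(bx + STEP, by);
            Hit h01 = trace_pixel(bx,        by + STEP);
            Hit h11 = trace_pixel(bx + STEP, by + STEP);
            bool same = h00.object_id == h10.object_id &&
                        h00.object_id == h01.object_id &&
                        h00.object_id == h11.object_id;
            for (int y = 0; y <= STEP; y++)
                for (int x = 0; x <= STEP; x++)
                    img[(by + y) * w + (bx + x)] = same
                        /* all corners agree: interpolation is safe */
                        ? bilerp(h00.c, h10.c, h01.c, h11.c,
                                 (float)x / STEP, (float)y / STEP)
                        /* corners disagree: follow the "missing" rays */
                        : trace_pixel(bx + x, by + y).c;
        }
    }
}

Note the disagree branch re-fires the four corner rays; a real implementation would cache the corner hits rather than trace them twice.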
Making cars look shiny.
If Nvidia can come up with a way of making *my* car look shiny without my having to polish it, they have a sale!
I cast about (ha! ha!) playing with a hybrid ray tracing / rasterization engine back in the early '90s. It was really just a concept fleshed out with some Mode X C code, but the idea was, as some people have said, to use it to replace a z-buffer plus a little bit more. Mine would also have decided which polygons to render in the first place. Since my quality standards weren't too high (see time of development), I was looking at having it 'miss' far-off polygons intentionally, which would result in a kind of blurry background. So I didn't have to trace a ray for each pixel; I'd do every fifth or so, which meant only a few thousand rays for a 320x240 screen (a rough sketch of the scheme follows after this comment).
The nice bit was that it handled a lot of neat effects without much of a speed hit - translucent windows and such.
If I'd kept going I think it could have been pretty cool, but I was really young at the time and didn't have the knowledge to exploit it. And now the standards are much higher, so I still don't have the knowledge to exploit it even though I have more knowledge! So it goes...
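For the curious, here's a rough guess at the shape of that visibility trick in C: cast one ray per block of pixels, record which polygons get hit within a cutoff distance, and let the rasterizer draw only those. The names, the cutoff, and the stub cast_ray are a reconstruction, not the poster's actual code.

#include <stdbool.h>

#define GRID        5        /* one visibility ray per 5x5 pixel block        */
#define MAX_POLYS   1024
#define FAR_CUTOFF  500.0f   /* rays deliberately "miss" anything past this   */

/* Stub scene query so this compiles standalone: returns the index of
 * the nearest polygon hit through pixel (x, y) within max_dist, or -1.
 * A real version would intersect the actual polygon list here. */
static int cast_ray(int x, int y, float max_dist)
{
    (void)x; (void)y; (void)max_dist;
    return -1;
}

/* Marks the polygons a sparse ray grid can see; everything else,
 * including anything beyond FAR_CUTOFF, is simply never drawn. */
void mark_visible(int w, int h, bool visible[MAX_POLYS])
{
    for (int i = 0; i < MAX_POLYS; i++)
        visible[i] = false;
    for (int y = 0; y < h; y += GRID)        /* 64 x 48 = 3,072 rays */
        for (int x = 0; x < w; x += GRID)    /* at 320x240           */
        {
            int id = cast_ray(x, y, FAR_CUTOFF);
            if (id >= 0)
                visible[id] = true;
        }
}

The rasterizer then draws only the flagged polygons; distant geometry never receives a ray and simply drops out, which is the deliberately 'missed' blurry background described above.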