Nvidia CEO says 'no' to VIA acquisition
Graphics company doesn't need in-house CPU tech - for now...
Nvidia doesn't want to buy VIA, the graphics chip maker's CEO has claimed. Nvidia is completely focused on being a "visual computing technology company", he said. Well, for the moment, at any rate...
Speaking to CNet, Nvidia CEO Jen-Hsun Huang suggested neither Nvidia nor VIA is interested in acquiring the other's business.
That the two might be considering such an option has been doing the rumour-mill rounds of late, primarily because pundits feel Nvidia desperately needs to get into the processor business if it's to remain relevant.
AMD is busily attempting to integrate graphics processing cores into future 'Fusion' CPUs. Intel is taking a similar approach, having said its upcoming 'Nehalem' processor family will include models that incorporate a graphics engine.
Both moves are fuelled by the shift of functionality off ancillary northbridge chips and onto the processor. First it was the memory controller, and now we're looking at integrated graphics coming on board too.
If Nvidia isn't to lose out, it has to follow suit, the argument runs. And since buying a CPU maker is easier than kick-starting development on its own, the pundits look around to see who Nvidia could buy and settle upon VIA.
Huang's point is that there's plenty yet for Nvidia to do in the graphics arena before it needs to worry about the longevity of its core business. Fusion and Nehalem are a threat to Nvidia's chipset sales, not its discrete GPU lines. Yes, Intel has its eye on that too, with its 'Larrabee' chip, but it's very hard to imagine Nvidia not working on a ray-tracing chip of its own, if only as an insurance policy.
And it's easy to look at Nvidia and think only of PC graphics and chipsets, the two areas where the company is most vulnerable from the Fusion and Nehalem initiatives, but its efforts extend far beyond that into industrial and mobile graphics.
There, Huang said, Nvidia works successfully with other chip makers to allow CPUs and GPUs to co-operate efficiently. The upshot: Nvidia doesn't need an in-house CPU design and production team.
Since VIA's share of the x86 processor market is way smaller than AMD's, let alone Intel's, it's questionable how much real value Nvidia would get out of it. VIA has a decent CPU business, but it's hard to see it mounting a serious challenge to its rivals.
Combining Nvidia GPU technology and VIA CPU architectures isn't going to change that, leaving such a plan as little more than a survival strategy. Nvidia's not yet in a position where it desperately needs one of those.
Now what would be awesome...
... would be a proper nVidia 9000 series 3D graphics accelerator on Via's upcoming Mobile ITX system!
Get to it, miniaturisation boffins!
Companies like Valve are starting to produce multi-threading toolkits in the same way that id produced FPS engines, so the difficulty of writing software to handle multiple cores will decrease. The other thing is that the software doesn't have to be explicitly parallelised so long as the OS can marshal different processes onto different cores. Individual tasks don't necessarily get much faster, but you could browse the web (if you don't think that's processor intensive, try a Flash-heavy site on a Mac), encode video, maybe do some Folding@home and whatever else the cool kids are doing, all at once with no slowdown.
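The point about the OS marshalling separate processes onto different cores can be illustrated with a minimal sketch (the `busy_task` function and the timing harness here are hypothetical stand-ins for real workloads like encoding or Folding@home, not anything from the article): each process is a plain serial program, yet the OS can run them on separate cores at once.

```python
import multiprocessing as mp
import time

def busy_task(n):
    # Plain serial, CPU-bound work: a stand-in for one independent
    # workload (encoding a clip, a folding work unit, etc.).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    n = 500_000
    jobs = [n] * 4

    # Four workloads run back-to-back on one core...
    start = time.perf_counter()
    serial_results = [busy_task(j) for j in jobs]
    serial_time = time.perf_counter() - start

    # ...versus the same four handed to separate processes, which the
    # OS scheduler is free to place on different cores. No task was
    # rewritten to be parallel; the OS does the marshalling.
    start = time.perf_counter()
    with mp.Pool(processes=4) as pool:
        parallel_results = pool.map(busy_task, jobs)
    parallel_time = time.perf_counter() - start

    assert serial_results == parallel_results
    print(f"serial: {serial_time:.2f}s, 4 processes: {parallel_time:.2f}s")
```

On a multi-core machine the process-pool run typically finishes faster, even though `busy_task` itself is entirely serial, which is exactly the "many independent tasks, no slowdown" scenario described above.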
Jen-Hsun Huang has a point......
Jen-Hsun Huang may well have a point. Why are AMD and Intel so interested in adding parallel processing to their inherently serial chips?
I think we may well have hit the point where CPUs will become commodity items within the next few years. GPUs/RtPUs, on the other hand, have a very long way to go before they become commodity items.
If you're a CPU vendor it's a scary time. How do you sell a CPU that goes 'twice as fast' as the last one if that speed isn't needed (or easily utilized)?
More CPU cores won't make things better; they'll make things worse. It's damn difficult to write good parallel code unless the problem lends itself well to parallelisation (3D gfx, physics and video encoding are the only ones that spring to mind in consumer apps).
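The contrast between problems that parallelise well and those that don't can be sketched roughly as follows (the `encode_frame` and `serial_chain` functions are invented toy stand-ins, assuming video-encoding-style work on one hand and a step-by-step dependency chain on the other):

```python
from multiprocessing import Pool

def encode_frame(frame):
    # Each "frame" is processed independently of every other frame,
    # so the work splits cleanly across cores -- the video-encoding case.
    return sum(b ^ 0xFF for b in frame)  # toy stand-in for real encoding

def serial_chain(seed, steps):
    # Each step needs the previous result, so extra cores can't help:
    # this is the shape of most everyday consumer code.
    x = seed
    for _ in range(steps):
        x = (x * 1103515245 + 12345) % 2**31
    return x

if __name__ == "__main__":
    frames = [bytes([i % 256] * 64) for i in range(8)]
    with Pool(processes=4) as pool:
        encoded = pool.map(encode_frame, frames)  # trivially parallel
    chained = serial_chain(42, 1000)              # stuck on one core
    print(len(encoded), chained)
```

The first workload gets faster as cores are added; the second doesn't, no matter how many cores the vendor bolts on, which is the commenter's point about parallelisation only paying off for a narrow class of problems.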
If each new CPU has a shiny GPU bolted on and it's updated on a regular basis Intel & AMD can just keep going in the way they are used to.
I am not touching VIA with a long stick (think in terms of light-years long). So why would you want to combine that with a good product like nVidia?
VIA is notorious for incompatibilities. They have yet to implement a correctly working, certified (meaning passing ALL mandatory tests) PCI bus. I have video processing boards (as in video signal, as opposed to graphics: these boards process DV or HDV video in realtime) that absolutely refuse to run on anything VIA (heck, the machine can't even boot when you install such a card: it crashes during the memory test).
Why? These boards use the complete bus-mastering capabilities of PCI and PCI Express. VIA is too cheap to implement these advanced features (because they would have to pay licensing fees). The same boards work perfectly on Intel and nVidia chipsets.
Re: CPU GPU
Hardware-wise, the actual fabrication is done by TSMC, so that's irrelevant.
Software-wise, true, you could probably run your OS on an NVidia or ATI core nowadays, but that's not what it's designed for. It would be like rendering your 3D stuff in software on an Intel or AMD CPU. You could do it, but you'd be better off with the NVidia part.