Fellow from AMD ridicules Cell as accelerator weakling
Opteron and GPU will conquer all
IBM's Cell chip will struggle to woo server customers looking to turbocharge certain applications because the part has a fundamental design flaw, according to AMD fellow and acceleration chief Chuck Moore.
Sure, sure. Cell is a multimedia throughput dynamo and its SPEs (Synergistic Processing Elements) are just lovely. "But something happened on the way to the ranch," Moore said, speaking this week to a group of Stanford students. "You have to get going first on the PowerPC chip (inside Cell), and the PowerPC core is too weak to act as the central controller."
Moore presented the Stanford students with a possible vision for the future of computing where general purpose processors will function as a type of gateway, handling older code on their own and then funneling new types of software off to specialized silicon. Not surprisingly, Moore sees AMD's Opteron processor as the perfect general purpose chip and the GPUs produced by the ATI clan - rather than Cell chips - as the preferred accelerators for the specialized jobs.
The plan of attack presented by Moore will sound familiar to those of you following current trends in software and hardware development. The rise of multi-core processors has forced coders to adopt parallel programming methods that spread work across chips with numerous cores. In addition, researchers and companies on the cutting edge of high performance computing are looking at a variety of accelerators, including GPUs and FPGAs, to speed up certain libraries and applications.
Like others, Moore argued that we'll soon run into a major software issue, as too few applications will be able to deal with many-cored chips. Things look okay with two, four and even eight core chips, but we're in real trouble after that.
Some of the main issues will arise with the operating system, which handles a lot of the scheduling jobs.
"If you think about it, the OS has a scheduler in it, and it schedules to multiple cores," Moore said. "So, the OS kind of has a serial component to it... At some point, the OS starts to get in the way, and the OS actually becomes the bottleneck."
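Moore's scheduler point is, in effect, Amdahl's law: any serial component puts a hard ceiling on how much extra cores can help. A minimal sketch of that arithmetic (the five per cent serial fraction below is an illustrative assumption, not a figure from the talk):

```python
def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a small serial component - an OS scheduler, say - caps scaling:
for cores in (2, 4, 8, 64, 1024):
    print(f"{cores:5d} cores -> {amdahl_speedup(0.05, cores):.1f}x")
```

With a 5 per cent serial fraction, 1,024 cores buy you less than a 20x speedup: exactly the many-core trouble Moore describes.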
Accelerators present problems as well, since they're a notorious programming pain for developers more acquainted with things like the x86 instruction set. The Cell chip from IBM, Toshiba and Sony receives a ton of grief for being a programming beast - a fact also highlighted by Moore.
Plenty of people argue that GPUs are just as much of a pain, but Moore sees the graphics chip route as a realistic answer to dealing with tomorrow's software.
His "throughput machine" would include a number of Opteron chips up front to handle existing software and to crunch through single-threaded code. Then, you combine the Opterons with "a large number of small, power-efficient, domain optimized compute offload engines."
On top of all this, you need a better memory system and a better programming model that lives well above the operating system.
"The reason I am working on this right now is that I honestly do believe that new and emerging applications are defining and operating on much larger scale and more abstract data types.
"The way this would look is a traditional host would offload work to these dense compute accelerators. You would go through APIs, or libraries or domain specific libraries in some cases to avoid the heroic programming. You would use a concurrent runtime environment to ease some of the scheduling and resource management issues.
"And out of that what starts to happen - and this is an interesting result - is that today the industry is sort of locked on ISA compatibility. You are either x86 compatible or you are not. But I think this line of thought leads to API and platform level compatibility, which is a really nice result for the entire industry.
"Maybe it is not such a nice result for AMD because we happen to have a very successful franchise with x86. But I think this is just absolutely inevitable. I don't think we can fight it, so we are embracing it."
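Moore's API-level compatibility argument can be sketched in a few lines: if applications call a library routine rather than emitting ISA-specific code, the runtime is free to dispatch work to whatever silicon happens to be present. A toy illustration, with all names (`Runtime`, `HostBackend`, `register_backend`) hypothetical rather than any real AMD API:

```python
class HostBackend:
    """Fallback: run the work on the general purpose host CPU."""
    name = "x86 host"

    def dot(self, a, b):
        return sum(x * y for x, y in zip(a, b))

class Runtime:
    """Concurrent runtime that hides scheduling and resource management."""
    def __init__(self):
        self.backends = [HostBackend()]   # the host is always available

    def register_backend(self, backend):
        self.backends.insert(0, backend)  # prefer accelerators when present

    def dot(self, a, b):
        # The caller is compatible at the API level, not the ISA level:
        # the same call runs on a GPU backend if one was registered.
        return self.backends[0].dot(a, b)

rt = Runtime()
print(rt.dot([1, 2, 3], [4, 5, 6]))  # no accelerator registered: runs on host
```

The application never asks what instruction set is underneath, which is the "platform level compatibility" Moore calls inevitable.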
Overall, Moore argued that these heterogeneous machines with x86 and GPU processors will make more sense moving forward than the so-called many-cored chips that the likes of Sun and Intel are pursuing where software is spread across tens or even hundreds of similar cores. Of course, there are tons of software questions that need answers before we can fulfill Moore's vision.
You can catch Moore's speech here. ®
"As always, it depends on your application. For home computers most people want low latency, so you have a TV card with its own image-stream processor, sound card with its own processor and massive amounts of power on the gamers' (and vista-owners') graphics card. Physics cpus are also in view, offloading further tasks from the main cpu, all in the quest to get lower latency. I don't want my TV channel-surfing to degrade a voip call just because I've put three TV shows on screen at once. In this scenario it actually makes sense to have multiple (possibly lower power) specialised processors." - P Lee
So what you want is the silicon for the TV card to sit idle when you don't watch TV on your PC? And you want your video card's 3D hardware to sit idle while you're just doing 2D on your desktop? And you want your audio silicon to sit idle when you're not playing music? And your motherboard chipset that manages your hard drives to sit idle while you are not using them?
Intel's solution is to replace all of the computational portion of that hardware with one or more general purpose CPUs that will perform the same function <when needed> and which will be available for general purpose computing when they aren't needed to support the hardware feature set.
Why wouldn't I want my system's general computing performance to improve spectacularly when I stop using 3D video on my video card?
Deep Vector Registers
"At the end of the day, a single execution thread already spends too much time waiting for memory and IO before getting real work done." - Rich Turner.
Well, to solve that problem, you have to keep the data where it is more readily available: in a CPU register or in the cache. However, HLLs are specifically designed so that the concept of registers and caches is lost.
And of course, compilers continue to optimize poorly, typically producing code that is two to four times slower than properly optimized code - and for vectorizable operations, 60 to 200 times slower, if not more.
A 64K instruction set has more than enough room to directly address 64K of internal registers - be they scalar or vector.
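The 60-200x figures above are the commenter's, but the gap between naive and optimized code is easy to demonstrate. A rough Python illustration: the same reduction run element by element through the interpreter versus in optimized native code (not SIMD, but the same keep-the-accumulator-in-a-register principle):

```python
import timeit

N = 100_000
data = list(range(N))

def scalar_sum(xs):
    """Naive loop: every add round-trips through a boxed interpreter object."""
    total = 0
    for x in xs:
        total += x
    return total

# sum() performs the identical reduction in optimized C, where the
# accumulator can stay in a machine register instead of the heap.
t_loop = timeit.timeit(lambda: scalar_sum(data), number=5)
t_opt = timeit.timeit(lambda: sum(data), number=5)
print(f"loop: {t_loop:.4f}s  optimized: {t_opt:.4f}s  ratio: {t_loop / t_opt:.0f}x")
```

Both produce the same answer; the only difference is how close the hot data stays to the execution units, which is precisely the commenter's point.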
The universe runs in parallel
"That is not to say that parallel clusters are not useful. I think they are cool, but they're not the magic bullet some people are claiming them to be." - Lou Gosselin
Did your parallel computing brain come up with that all by itself?
The universe runs in parallel. But it's amazing what you can serialize when you try.