Original URL: https://www.theregister.com/2008/11/20/many_cored_processors_and_software/

The madness of 'king cores

80-core servers will add up to nothing without hypervisors

By Chris Mellor

Posted in Channel, 20th November 2008 07:02 GMT

Opinion Intel is pumping up its virility through proxies like Michael Dell, reminding us of an 80-core chip future. It's impressive, but Intel is a company obsessed to distraction with Moore's Law. It's like watching a crack addict do anything to get the next hit: in Intel's case, a doubling of processor performance every 18 months, whatever it takes.

The software industry already has problems writing and compiling multi-threaded software so that the threads can be spread across the cores and execute in parallel. The more thread bandwidth there is in a chip the harder the job gets.

Intel's very recently announced Core i7, the seventh iteration of its Pentium technology using the Nehalem micro-architecture, has four cores, each running two threads. Stick that in a 4-socket server and you have Hyper-V heaven: 16 cores and 32 threads, and at, say, 5 VMs per core, 80 VMs in one server.
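The arithmetic behind that back-of-the-envelope claim, spelled out (the figures are the article's; the variable names are just for illustration):

```python
# Server-level VM capacity, using the article's figures:
# 4 sockets x 4 cores, 2 threads per core, a nominal 5 VMs per core.
sockets = 4
cores_per_socket = 4
threads_per_core = 2
vms_per_core = 5

cores = sockets * cores_per_socket      # 16 cores in the box
threads = cores * threads_per_core      # 32 hardware threads
vms = cores * vms_per_core              # 80 VMs in one server

print(cores, threads, vms)              # 16 32 80
```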

Intel's 80-core teraflop wafer

To my mind, and admittedly I've got the brain of a 6-year-old here, that's okay if the hypervisor can push a set of VMs onto each core and each VM's threads run on that core. The overall set of software threads in the server gets carved up by the software stack before we get to compiled app code threads having to be run across separate cores.

That's a killer problem. Hold that thought for a moment and consider this: my natty little MacBook has two cores already. Sony's PS3 has a Cell processor running 9 cores: one a controller core, the other eight replicated graphics cores which do all the render work, and very well too. That seems a classic multi-core, parallel processing app. Servers are now running quad-Xeons, the 6-core Dunnington has been announced, and 8-core Xeons are coming. Intel introduced us to Big Core Daddy Proto (wafer pictured) last year with Polaris, its 80-core teraflop chip demonstrator.

This isn't a proper many-cored chip as it is just - just! - 80 dual floating point engines and not X86 cores. But Intel's Paul Otellini and Michael Dell are talking confidently of 80 X86 cores on a chip. The intervening thing seems to be Larrabee, an Intel concept or design with 8 to 32 X86 cores that can do graphics processing jobs as well as standard X86 application work.

So we're looking at 8-core X86 chips in 2009 and then, applying the Moore's Law equation of a quadrupling of processor chip performance every three years, we're looking at 32 cores in 2012, 128 in 2015 and 512 in 2018 or so, ignoring any added performance boost from better core performance. So an 80-core X86 chip would come our way around 2014.
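That extrapolation can be sketched in a few lines, assuming the article's starting point of 8 cores in 2009 and a quadrupling every three years (the function name is mine, not anything Intel published):

```python
import math

def projected_cores(year, base_year=2009, base_cores=8):
    """Cores per chip if the count quadruples every three years."""
    return base_cores * 4 ** ((year - base_year) / 3)

for year in (2009, 2012, 2015, 2018):
    print(year, round(projected_cores(year)))   # 8, 32, 128, 512

# Solve for when the curve crosses 80 cores:
years_to_80 = 3 * math.log(80 / 8, 4)           # ~5 years, i.e. around 2014
```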

Back to the software problem. What applications are capable of executing across 80 cores in parallel, with, say, two threads per core, meaning 160 parallel threads? Hmm, ones that run on multi-processor supercomputers already - nuclear explosion simulations and genome sequencing and molecular modelling and ... not Oracle databases, nor the quarterly budget run, and definitely not the accounts receivable app.

Unless I'm missing something the vast bulk of existing applications, the stuff we want to run more quickly, are single or single-digit threaded applications. They need 80 cores like a schizophrenic needs more brains to get really, really confused with multiple personalities.

Intel can mount as many Threading Building Blocks development efforts to add parallel programming to C++ applications as it likes, but they're not going to boost the speed of the weekly sales order processing run.

What the many-cored chips could do is to radically increase the application bandwidth of the data centre servers they run in. The cars on the data centre motorway, unlike the multi-media applications on PS3 and the like, won't drive any faster - instead you'll have many more lanes so that the overall number of vehicles on the motorway goes up.

That means, to me and my single-cored brain of a 6-year-old, that the two big bennies of a many-cored chip could be radically better multi-media interface experiences - games, immersive stuff - and radically better data centre application bandwidth.

The former needs software written and compiled for many-cored parallel processing. The latter needs hypervisors to take a software app load composed of dozens of single-digit threaded apps and run each of them in a separate VM on its own core. The hypervisor does the spreading of many single-digit threaded apps across the cores because, no matter how clever the compiler, it can't take a single-digit-threaded app and make it run across 80-cores. Compute that does not.
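The hypervisor's job described above can be caricatured in a few lines: not parallelising any one app, just spreading many single-threaded VMs across the cores. This is a toy sketch, with all names mine, not any real hypervisor's API:

```python
# Toy model of a hypervisor spreading single-threaded app VMs
# across cores, round-robin, one scheduling decision per VM.
def place_vms(vm_names, core_count):
    """Pin VMs to cores round-robin; returns {core: [vms on that core]}."""
    placement = {core: [] for core in range(core_count)}
    for i, vm in enumerate(vm_names):
        placement[i % core_count].append(vm)
    return placement

# 160 single-digit-threaded legacy apps onto an 80-core chip:
apps = [f"vm-{i}" for i in range(160)]
layout = place_vms(apps, 80)        # two VMs land on each core
```

The point the sketch makes is the article's: the win comes from the placement loop, not from any compiler making one app's code span 80 cores.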

I'm seeing lots of glamorous guff about brilliantly photo-realistic 3D, hyper-fidelity sound, and fantastically immersive interface experiences on Larrabee and the like, but zilch about hypervisors being the main interface for legacy apps to run on many-cored machines. Without that hypervisor ability, data centres will get no boost from many-cored servers at all.

Maybe my core's not running too well but that's my logic. What's yours? ®