Zombie Moore's Law shows hardware is eating software

Customised CPUs are doing things software just can't do on commodity kit

After being pronounced dead this past February - in Nature, no less - Moore’s Law seems to be having a very weird afterlife. Within the space of the last thirty days we've seen:

  1. Intel announce some next-generation CPUs that aren’t very much faster than the last generation of CPUs;
  2. Intel delay, again, the release of some of its 10nm process CPUs; and
  3. Apple’s new A10 chip, powering the iPhone 7, arrive as one of the fastest CPUs ever.

Intel hasn’t lost the plot. In fact, most of the problems with Moore’s Law have come from Intel’s slavish devotion to a single storyline: more transistors and smaller transistors are what everyone needs. That push toward ‘general-purpose computing’ gave us thirty years of Wintel, but it no longer looks to be the main game. The CPU is all grown up.

Meanwhile, in the five years between iPhone 4S and iPhone 7, Apple has written its obsessive-compulsive desire for complete control into silicon. Every twelve months another A-series System-on-a-Chip makes its way into the Apple product line, and every time performance increases enormously.

You might think that’s to be expected - after all, those kinds of performance improvements are what Moore’s Law guarantees. But the bulk of the speed gains in the A-series (about a factor of twelve over the last five years) don’t come from making more, smaller transistors. Instead, they come from Apple’s focus on using only those transistors its smartphones and tablets actually need.
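A rough back-of-envelope calculation shows how far that outruns process scaling alone. The two-year doubling cadence below is an assumption for illustration, not Apple’s figure:

```python
# Illustrative arithmetic only: how much speedup would a Moore's-Law-style
# doubling every two years predict over five years, versus the ~12x gain
# cited above for Apple's A-series?

years = 5
doubling_period = 2  # assumed doubling cadence, in years

process_only_gain = 2 ** (years / doubling_period)   # ~5.7x
observed_gain = 12                                    # rough A-series figure

print(f"Process scaling alone: ~{process_only_gain:.1f}x")
print(f"Observed A-series gain: ~{observed_gain}x")
print(f"Left over for design, not scaling: ~{observed_gain / process_only_gain:.1f}x")
```

On those assumptions, better transistors account for roughly half the improvement; the rest has to come from how Apple spends them.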

Although the new A10 hosts a four-core ARM big.LITTLE CPU, every aspect of Apple’s chip is highly tuned to both workload and iOS kernel-level task management. It’s getting hard to tell where Apple’s silicon ends and its software begins.

And that’s exactly the point.

The cheap and easy gains of the last fifty years of Moore’s Law gave birth to a global technology industry. The next little while - somewhere between twenty and fifty years out - will be dominated by a transition from software into hardware, a blending of the two so complete it will become impossible to know where the boundary between them lies.

Apple isn’t alone; NVIDIA has been driving its GPUs through the same semiconductor manufacturing process nodes that Intel pioneers, growing more, smaller transistors to draw pretty pictures on displays, while simultaneously adding custom bits to move some of the work previously done in software - such as rendering stereo pairs for virtual reality displays - into the hardware. A process that used to cost 2x the compute for every display frame now comes essentially for free.
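To see why that matters, here is a toy cost model of the difference - the numbers and function names are invented for illustration, and this is not NVIDIA’s actual pipeline (the real mechanism in its Pascal-generation GPUs is marketed as simultaneous multi-projection):

```python
# Toy cost model (illustrative only): submitting the scene twice - once per
# eye - versus a single geometry pass replicated to both eye viewports.

GEOMETRY_COST = 1.0   # arbitrary units per full geometry pass (assumed)
PER_EYE_RASTER = 0.2  # assumed extra cost of rasterising each eye's viewport

def stereo_in_software(frames: int) -> float:
    """Naive approach: run the full geometry pass once per eye."""
    return frames * 2 * (GEOMETRY_COST + PER_EYE_RASTER)

def stereo_in_hardware(frames: int) -> float:
    """Multi-projection-style approach: one geometry pass feeds both viewports."""
    return frames * (GEOMETRY_COST + 2 * PER_EYE_RASTER)

if __name__ == "__main__":
    frames = 90  # one second of frames at a typical VR refresh rate
    print(f"Software stereo: {stereo_in_software(frames):.0f} cost units")
    print(f"Hardware stereo: {stereo_in_hardware(frames):.0f} cost units")
```

The point is structural: the expensive geometry work happens once instead of twice, so the second eye costs only the cheap per-viewport step.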

Longtime watchers of the technology sector will note this migration from software into hardware has been a feature of computing for the last fifty years. But for all that time the cheap gains of ever-faster CPUs versus the hard work of designing and debugging silicon circuitry meant only the most important or time-critical tasks migrated into silicon.

Now that Moore’s Law has given up the ghost, we’re seeing a migration away from software and into hardware, wringing every last bit of capacity out of the transistor.

This transition is already well underway. Last month The Register revealed that Microsoft had designed a custom processor for its HoloLens augmented reality goggles. This surprisingly sophisticated 24-core DSP handles all of the data flowing in from the HoloLens’ many spatial sensors, taking a huge processing burden away from its rather wimpy Atom CPU - and does the job two hundred times faster.

It took a specialised team of silicon designers to create the HoloLens DSP, because designing chips is hard work, fraught with trial and error and hampered by poor support tools. It’s an elite field requiring highly specialised skills.

Pretty much where software was thirty years ago.

Now that the drive into hardware is well and truly on, we can expect a new generation of tools - many backed by machine learning and artificial intelligence capabilities - to make chip design significantly easier. Whether it ever becomes as easy as writing code is an open question - but between FPGAs today and nanoscale 3D printing tomorrow there’s every reason to suspect the ‘build’ phase in the late 2020s will be precisely that - building a chip.

It’s just this that makes the Mystorm project so very interesting. Sitting somewhere between the friendly hackability of Arduino and the deep power of the Raspberry Pi, Mystorm wants to make FPGA design accessible and cheap: the board is designed to sit on the Raspberry Pi’s 40-pin connector, and its makers are aiming for a stripped-bare retail cost of around $30 - the same as a Raspberry Pi.

But hardware is only half the battle here. If we want 11-year-olds designing custom hardware (and we very much do want that) we’ll need to give them the kinds of tools, support and endless YouTube videos they already have whenever they tackle an Arduino or Raspberry Pi project - resources also useful for a generation of programmers in their 20s who will spend much of the rest of their careers cozying up to the silicon. Then Moore’s Law will live on, long after we’ve reached its physical constraints, as we explore the limits of creativity and imagination. ®
