Big Blue bigwig: Tiny processor knobs can't shrink forever
You cannae break the laws of physics - and 7nm is the limit
HPC blog While at IBM’s Smarter Computing Summit last week, I had the great pleasure of hearing Big Blue's Bernie Meyerson talk about limits to today’s tech, and the associated implications.
Bernie is IBM’s VP of Innovation and one of the rare technologist-scientist types who can explain highly technical concepts clearly and directly enough that they can be understood by a reasonably intelligent grey squirrel (and me too).
Even better, he’s highly entertaining and doesn’t hedge when it comes to stating what’s what in the world. Back in 2003 he predicted that Intel would never deliver on its promises of 4 to 5GHz CPUs and would, in fact, be forced to shift to multi-core processors.
Meyerson backed up his brash prediction (it was plenty brash back then) by sharing electron microscope images of individual atoms that showed they’re kind of lumpy. The problem with lumpy atoms is that when you use only a handful of them to build gates, they leak current like a sieve. When asked about this, Intel denied over and over that there was a problem – right up to the point when it announced it was scrapping its entire product strategy in favour of a multi-core approach.
So when Meyerson talks, I pay attention. And Meyerson is talking again.
In his presentation at the Pinehurst golf resort in North Carolina, he was again playing on the theme that we can’t shrink our way to higher performance any more. In fact, when it comes to chips, we have only a generation or two left before we reach the end of the line.
So where’s the end of the line? According to Bernie: 7 to 9 nanometres. When the features on a chip get down to this minute size, you start to see “very nasty” quantum mechanical effects that impair the performance of the processor's decision-making gates.
The problems at 7nm are profound to the point where there isn’t really any way around them – it’s just too damned small – and there isn’t a way to scale down an atom. It’s a fundamental limit, and it’s finally in sight. Chips in mass production these days have a 32nm or 22nm feature size, and 14nm is not far down the line.
Unfortunately, I can’t toss around the correct scientific terms to pretend I know what I’m talking about here. I have only my own deplorable notes for reference, and Meyerson’s time slot forced him to move pretty quickly through his material. But he was probably talking about quantum tunnelling, a phenomenon in which particles (such as electrons) travel through barriers – like those in very thin semiconductor gates – that they should not be able to cross. The result is lots of current, relatively speaking, leaking from these tiny switches, which ramps up the device's power consumption.
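To give a sense of scale – using my own illustrative numbers, not anything from Meyerson’s slides – the textbook picture is that the odds of an electron tunnelling through a barrier fall off exponentially with the barrier’s thickness, so shaving a couple of nanometres off an insulating layer sends leakage up by many orders of magnitude. A minimal sketch, assuming a simple rectangular barrier of roughly the height you’d see in a silicon dioxide gate insulator:

```python
import math

# Back-of-envelope sketch of quantum tunnelling through a rectangular barrier
# (the standard one-electron textbook model). Barrier height and thicknesses
# below are illustrative assumptions, not measured device parameters.

HBAR = 1.054e-34   # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # one electronvolt in joules

def tunnel_probability(barrier_ev, thickness_nm):
    """Approximate transmission probability, T ~ exp(-2*kappa*d)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

# Compare a 3nm insulating barrier with a 1nm one, assuming a ~3eV barrier
# (roughly the conduction-band offset of silicon dioxide on silicon).
for d_nm in (3.0, 1.0):
    print(f"{d_nm:.0f}nm barrier -> tunnelling probability ~ {tunnel_probability(3.0, d_nm):.1e}")
```

On those assumptions the leakage probability jumps by around 15 orders of magnitude between the 3nm case and the 1nm case – which is the general flavour of “very nasty” Meyerson was getting at.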
Meyerson also talked about the limitations facing us on the storage side. Like most great stories (and many great ideas too), it starts in a bar. In this case, it was Bernie in a bar with a bunch of other smart guys, probably knocking back drinks that aren’t accessorised with little umbrellas. Like all barroom conversations, the topic eventually turned to magnetic storage density. More specifically: how many atoms would you need to reliably store a single bit of data?
This prompted some non-barroom research and scientific activity. The resulting answer? Twelve. It takes twelve atoms to reliably store a bit of data. Any fewer and you lose stability, meaning that parts of the data might disappear, or morph into something you didn’t store. This is related to the same quantum effects discussed above and is ultimately down to the fact that we can’t scale atoms to a handier size.
From what Meyerson said, it sounds like we have a bit more room before we start to run up against the limit on storage density. If my notes are correct, we won’t approach the 12-atom limit until areal density grows by around 100 times. Right now, 1TB per platter is the highest density available. Theoretically, then, we may be able to get to 100TB per platter and 300TB per drive at maximum density.
So how long do we have until we hit the limit? It depends on how fast density grows. Historically, we’ve seen density grow anywhere between 20 per cent and 100 per cent per year. Lately (the last decade or so), growth has ranged between 20 per cent and 40 per cent annually, meaning that we might hit the twelve-atom limit in as few as 13 years or as many as 25.
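As a sanity check on that range – my arithmetic, not Meyerson’s – growing by a factor of 100 at a compound 20 to 40 per cent a year works out like this:

```python
import math

# How many years of compound growth does it take to reach a 100x increase in
# areal density, at the 20-40 per cent annual growth rates quoted above?
target_factor = 100.0

for annual_growth in (0.40, 0.20):
    years = math.log(target_factor) / math.log(1.0 + annual_growth)
    print(f"{annual_growth:.0%} per year -> about {years:.0f} years to reach {target_factor:.0f}x")
```

That comes out at roughly 14 years at the fast end and 25 at the slow end, which squares with the 13-to-25-year range above.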
That’s an eternity in the tech business – maybe even long enough for someone to figure out how to shrink atoms down to a more convenient size. ®
Is this such a bad thing?
I have, in front of me, an 8-core, 8GB, 1TB laptop with stupendous graphics ability. It was the cheapest that fit my criteria (which focused on things like having a numpad, enough USB ports, etc). And what am I doing with it? Browsing the web, sending email, and some mundane network admin tasks. Where's all my processor power actually being used most? Games. Outside of that, I'm just drawing pretty boxes in (apparently) extremely inefficient ways. I'm using 3GB of memory with hardly anything running, and although some of that is file cache, that's something that will be unnecessary soon if SSDs make their final leap to affordability.
With the limit on processor speed, people started to take advantage of multi-core. With a limit on that, people jumped onto GPU assistance. With a limit on what a device of a given size can do overall, hopefully we'll go back to some good old-fashioned efficient code. Like not requiring 3GB of memory, dozens of "services" and lots of "frameworks" just to draw a couple of 2D apps on a 2D screen (and I don't even have flashy stuff like Aero enabled!).
I program myself, and I actually feel intimidated by the sheer amount of power available to me when I need it. And, yes, I get lazy and think "Ah, it'll be fine on a modern machine" but I think we'll have to go back to some decent programming again.
Of course, what will happen is that instruction sets will grow (apparently the AES instructions in my processor let me do 2Gbit/s of encryption compared to 200Mbit/s in software), chips will increase in size, cooling will take precedence, and we'll end up with huge monstrosities that still take 30 seconds to load whatever-version-of-Word-is-around.
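If you want to put a rough number on your own box, a quick Python sketch along these lines measures bulk AES throughput (it uses the third-party cryptography package; on most builds that routes through OpenSSL, which uses the CPU's AES instructions when present, so it shows the accelerated figure rather than reproducing the hardware-versus-software comparison above):

```python
import os
import time

# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                   # AES-256 key
nonce = os.urandom(16)                 # CTR-mode initial counter block
data = os.urandom(64 * 1024 * 1024)    # 64MB of plaintext to push through

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

start = time.perf_counter()
encryptor.update(data)
encryptor.finalize()
elapsed = time.perf_counter() - start

print(f"AES-256-CTR bulk throughput: {len(data) * 8 / elapsed / 1e9:.2f} Gbit/s")
```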
It's both hilarious and sad that first-boot startup times and program first-run times haven't changed since the DOS days (or, on the average person's PC, have significantly lengthened). Hell, I can emulate Windows 3.1 booting quicker than I can boot Windows 7 - and although they do a lot more now, not much of it actually ends up as end-user-visible change.
Re: Meyerson predicted Intel's move away from speed to cores?
...because of the amount of cooling required to achieve that speed.
And the reason you need that phenomenal level of cooling is to deal with heat from the current leakage of a processor running at a honking overvoltage and ludicrous clocks. Or "exactly what he said" in other words.
Being one of the geezers who grew up in the Commodore/Atari age, I vividly remember programmers using sheer prowess to squeeze the impossible out of very limited hardware on a daily basis. Video game consoles are the last refuge of that practice. Everyone else just stopped bothering. Efficiency does not sell new hardware.
"Sorry, we were too lazy for efficient code. Just buy a new device, will you? It will come with all sorts of sustainability PR to make you feel good about it."