iSuppli: Moore's Law to take a breather
Exponential growth may have had its chips
Is Moore's Law, the driving force behind the technology and economics of the chip business, going to take a holiday? The analysts at iSuppli think so. And sooner than you think, and maybe not for the reasons you are thinking.
The old trick of cranking up clock speeds on shrinking chips to boost performance, the first phase of the application of Moore's Law, has been dead for several years now; the second phase was integrating more and more system components onto chips. When power consumption and heat dissipation issues put the kibosh on cranking clocks, chip makers started slapping multiple cores onto chips to boost performance, and they have been stuck, more or less, at the same clock speeds across the various architectures ever since.
The clock ceiling is near 1.5 GHz for Sparc T and Itanium processors at the moment, and around 3 GHz for x64 processors. It is 3 GHz for Power5 chips and around 5 GHz for Power6 processors, which have a different pipeline design from the Power5. But adding cores has its own issues, as computer architecture guru David Patterson of the University of California at Berkeley explained last fall at the SC08 supercomputing trade show, and as analysts at Gartner discussed earlier this year.
Basically, just because chip makers can keep adding cores doesn't mean that the application software and end user workloads running on this iron will be able to take advantage of those cores (and their varied counts of processor threads), because of the difficulty of parallelising software.
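To put a rough number on that difficulty, the usual rule of thumb is Amdahl's Law, which caps the speedup extra cores can deliver by whatever fraction of a workload stays stubbornly serial. The little Python sketch below, using a made-up 10 per cent serial fraction rather than any figure from iSuppli, shows how quickly the returns from piling on cores flatten out:

```python
# Illustrative only: Amdahl's Law speedup for a workload where a fixed
# fraction of the work cannot be parallelised. The 10 per cent serial
# figure is a made-up example, not a number from the iSuppli report.
def amdahl_speedup(cores, serial_fraction):
    """Best-case speedup on `cores` cores when `serial_fraction`
    of the runtime must stay single-threaded."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    serial = 0.10  # assume 10 per cent of the work is inherently serial
    for cores in (1, 2, 4, 8, 16, 64):
        print(f"{cores:3d} cores -> {amdahl_speedup(cores, serial):.2f}x speedup")
    # Even with 64 cores the speedup stalls below 9x, which is why simply
    # adding cores does not automatically make software faster.
```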
iSuppli is not talking about these problems, at least not today. But what the analysts at the chip watcher are pondering is the cost of each successive chip-making technology and the desire of chip makers not to go broke just to prove Moore's Law right.
"The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20 nanometers (nm), to 18nm nodes," explains Len Jelinek, director and chief analyst for semiconductor manufacturing at iSuppli in a new report.
"At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it," he adds.
At that point, says Jelinek, Moore's Law becomes academic, and chip makers are going to extend the time they keep their process technologies in the field so they can recoup their substantial investments in process research and semiconductor manufacturing equipment. Look at these pretty money curves going all flat:
As you can see from the chart, iSuppli reckons that the ramp up to 90 nanometer (0.090 micron) technology by chip makers was steep, and that revenue then fell off just as quickly. (This is not chip count or aggregate transistor count produced, but the revenue derived from chips using each process.) And while the ramp to 65 nanometer processes was just as steep, it is not going to peak and fall away the same way; iSuppli reckons that 65 nanometer processes will be used to make chips at a more or less steady state heading out to 2012.
Look at the 45 nanometer ramp. It is not as steep as the 90 nanometer or 65 nanometer ramps, and rather than peaking, it heads up very slowly in a straight-line fashion to just rise above the 65 nanometer peak.
It took about a year for 65 nanometer to hit its peak (from the breakeven point where it was generating no revenue in early 2007), but 45 nanometer processes, which started making money in early 2008, are going to take until late 2011 or early 2012 to hit the same peak as 65 nanometer hit. Four years instead of one.
"The semiconductor industry will be living with historical generations of technology longer than it did before," Jelinek says. "You are not seeing these geometries rise and fall off the way they did before. Rather, they are living on."
And they are living on because chip makers are going to be forced, by the high cost of each generation of chip technology, to maximise the money generated by a process rather than to push chip performance and lower chip costs. "Historically, the focus in the semiconductor industry was always how quickly you could move to the next geometry node. Now the question is how to make money by sustaining a specific node."
One way iSuppli reckons chip makers will extend the life of future chip processes is to go three-dimensional in chip designs. Instead of trying to shrink geometries, chip makers will take the system-on-chip and multicore architectures to a new extreme.
But this still won't solve the software parallelism problem that is looming large. While this is not directly the chip makers' problem, if that problem doesn't get solved, it won't matter how much stuff chip makers can cram into a chip. No one buys a computer that doesn't offer more performance or more features, and if the software can't give an end user a better experience, they don't buy the computer and the chip is a dud, no matter how cool it might be technologically. ®
Couple of observations:
- On the "lazy developer" issue, for sure developers are less careful to eke performance out of systems than before. I actually started on a ZX81 many years ago and wrote a graphical data analysis app that fit in 16k (and evaporated every time I nudged the RAM pack.) At that time a lot of time was spent on micro-optimizations just because hardware resources were so limited. Now there is no need to tune to that level. Faster hardware is not just for faster performance, it also allows developers to work more quickly by using higher level abstractions and spending less effort on tuning. If the applications do not need ultimate performance, this is a Good Thing. Hardware is cheap, developers are not.
If/when Moore's Law does grind to a halt, you may see the balance shift again, with more resources spent on tuning.
- On the article itself, I think it is a very insightful analysis and I'm disappointed that some here dismiss it with a wave of the hand and a "they'll think of something" comment. Clearly the current course is not sustainable, and it is not at all clear that something will come along just in time to save the day. Although it is possible, I prefer not to live by faith, and it is worth seriously considering the possibility that performance improvements will fall off dramatically in the future. That will lead to a very significant realignment of priorities in the industry, and may actually be a good thing.
The next logical step
Sorry Lionel Baden, I disagree; the article is right...
You ask us to take a two-year-old PC and run the latest game on it. Well, I have my Q6600 with a 9800 GX2 and it plays everything at top spec, even Crysis at 1920x1080 on my TV!
I have been saying this for ages: for years and years I didn't go six months without needing to upgrade something, be it GPU, RAM or HDDs. I have now gone 22 months, and the only time I have opened my case was to add a 1TB drive.
The HDD issue is the next step. In the same system I have the WD Raptor 10K drives, and for gaming they have been the best you can get for a realistic price. That is only starting to change now as SSDs get more affordable, which they are just about doing, though the price per GB is still astronomically higher; in most cases home users would only ever want their OS and a few apps or games installed on one.
Networking and the internet need to change sooner in my eyes. Our government say they are going to guarantee 2Mb to bring us to the front of the technology world; however, they fail to realise Korea has had 100Mb for years!
When I have an SSD and a gigabit WAN connection I will be ALMOST happy, but then I am a tech junkie like that!
re: Don't blame the programmers
You are kidding aren't you?
Yes, the current multi-core architecture has limits, but programming skills (and, to be fair, development tools) are nowhere near touching the capabilities of multi-proc/multi-core/multi-thread hardware. The non-uniform memory access (NUMA) architecture that you mention exists and has been in use for years, which is why correctly written software like Oracle runs so well on things like E25k/Superdome/P595 kit (yes, I/O is critical, but so is how you handle the I/O).
Does anyone remember when the dual-proc version of Doom came out? It was soooo much smoother than a single core with faster clock speeds, because a lot of effort went into using that second proc. Developers (I don't think they have earnt the title of programmer) don't think about this, and if something is slow they cry "faster procs", "more memory", "better I/O", not "how can I optimise this?"
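To make the point concrete, here is a rough sketch of the kind of work-splitting I mean; it is purely illustrative Python using only the standard library, not code from Doom, Oracle or anything else real:

```python
# Purely illustrative: split an embarrassingly parallel job across cores
# rather than waiting for a faster clock. The workload is a made-up
# stand-in for real per-chunk work (physics, rendering, a query, ...).
from concurrent.futures import ProcessPoolExecutor

def heavy_task(chunk):
    # Stand-in for the expensive per-chunk work.
    return sum(i * i for i in chunk)

def run_parallel(data, workers=4):
    # Carve the data into one slice per worker and farm them out.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(heavy_task, chunks))

if __name__ == "__main__":
    numbers = list(range(2_000_000))
    print(run_parallel(numbers))
```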
Developers are lazy, IMNSHO; it's all bells and whistles.
Is this just a rant? Well, think about it. Does anybody remember the ZX81? How about the fact that you could play chess in 1K of memory (yes, 1K, 1024 bytes, and that included memory for the display)? You even had high-res graphics of a sort (if you had 16K). Given that it took years to get to the limit of a ZX81 (a very simple Z80 chip and a tiny amount of memory), think about our current kit. No, the reason why programs run slowly is that they are written badly, not because we are hitting any hardware limits.
Maybe there should be a Moore's Law for developers: every 12 months they will need twice as much power to perform the same task? Moor(on)e's Law?
A great read and an interesting one
An interesting story, which explains a lot about why the P4 has been around for so long, even after multiple cores appeared, and why speed increases have all but halted. An interesting read and a great story :)