Custom ICs in small numbers to be cheap as (normal) chips
DARPA boffins' amazing claim
The US military says it is on track to revolutionise the world of chip manufacturing by making it possible to produce advanced sub-65-nanometer ICs in small numbers - at the same low unit costs delivered by today's billion-dollar, mass production chip factories.
As most Reg readers will be aware, the standard method of putting a circuit pattern onto a chip today involves shining high-intensity ultraviolet light through a complicated mask onto a silicon wafer. The snag is that the production of the mask is a vastly expensive business, meaning that setting up to produce a new chip design is so costly that only huge production volumes can get the price per chip down to affordable levels.
“As feature sizes on integrated circuits have decreased to below 65 nanometers, the cost of these mask sets has become an overriding factor for small-lot fabrication of only a few wafers,” says DARPA boffin Joseph Mangano.
This is bad news for the US military, which would like very much to be able to deploy "application specific" chips, designed just for the job in hand. Unfortunately, in the nature of things, most of these Application Specific Integrated Circuits (ASICs) would be required only in limited numbers - and thus the huge cost of producing lithographic masks would make them unacceptably expensive.
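To put the economics in rough numbers, here's a back-of-the-envelope Python sketch of how a one-off mask set dominates small runs. The figures used (a $3m mask set, $5,000 per processed wafer, 400 usable chips per wafer) are illustrative assumptions, not numbers from DARPA or this article.

    # Back-of-the-envelope mask amortisation. Illustrative numbers only,
    # not figures from DARPA or the article.
    def cost_per_chip(mask_set_cost, wafer_cost, wafers, chips_per_wafer):
        """Total cost per chip once the one-off mask set is amortised."""
        total = mask_set_cost + wafer_cost * wafers
        return total / (wafers * chips_per_wafer)

    MASK_SET, WAFER, CHIPS = 3_000_000, 5_000, 400  # assumed cost and yield

    print(cost_per_chip(MASK_SET, WAFER, wafers=10, chips_per_wafer=CHIPS))      # ~$762 a chip
    print(cost_per_chip(MASK_SET, WAFER, wafers=10_000, chips_per_wafer=CHIPS))  # ~$13 a chip

On those assumed figures a ten-wafer run costs roughly sixty times as much per chip as a ten-thousand-wafer run, which is the whole of Mangano's point.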
Thus it is that DARPA's ASICs programme is seeking to develop what it calls a "Nanowriter", which would dispense with masks and instead scribe the design onto the silicon using a beam of electrons.
Such "direct write lithography" is already well known: the problem with it is that it can take forever to draw millions or billions of individual components and connections onto a wafer using a single electron beam. But DARPA believe they've cracked this, by the simple expedient of equipping the Nanowriter rig with not one, not one thousand, but no less than a million parallel electron "beamlets", allowing it to deploy a million pen nibs at once in order to trace huge, fiddly IC layouts.
“By eliminating expensive mask sets, the Nanowriter tool will provide the cost benefits of large-scale IC manufacturing in quantities of one wafer," says Mangano.
According to DARPA, the Nanowriter will start out able to produce 45nm designs, and will scale down to 32nm in time. In a statement (pdf), the agency says that the project "recently achieved two important milestones" - specifically, the "micro-lens" array necessary to split one electron beam into a million has now been proved, and the "pattern blur" suffered by the first-generation "eBeam column" has been "significantly reduced".
The Pentagon war-boffins consider that Nanowriter technology will allow much wider use of ASIC custom chips, and make the production of "micro-electromechanical systems" (MEMS, the postulated nanorobots of the future) much easier. ®
A long term prediction
Many years ago, software development was a painfully expensive business. Access to the machine was the major bottleneck and so anyone who actually had to program for a living learned how to "measure twice, cut once" with their untried code.
Then the hardware got so cheap that everyone could have their own box and run their programs in a debugger. The economics turned on its head and the smart approach became "cut twice and throw away the one that didn't fit". Modern bug-ridden software is the result. Above a certain level of reliability, it simply isn't cost-effective to find all the bugs before you ship to the first paying customers and sometimes it is never cost-effective to fix them, because you can make more money by adding new features and selling to a wider customer base.
The same will happen to hardware. It'll take a couple of decades, but it will happen.
Measure twice, cut once.
Actually I reckon it was speed rather than price that did for this, as the habit persisted long after developer CPU time and storage volume allocations were gone.
Back in the day when compiling code was a submit-to-batch-and-then-read-the-paper-while-it-does-it exercise, repeated failed compiles had a significant effect on development time (and one's continued employment). Most of us would invest quite a bit of time in going over what we'd written, looking for errors before letting the compiler loose on it. As a result we'd not only drive out most if not all of the compile errors*, but quite often spot a few other foibles and tidy those up while we were at it.
As soon as compiling became a "blink and you missed it" process, it became common to allow the compiler to find the cockups, fix the fatal ones and chuck the result into testing. Couple that approach with modern project management/planning methodologies (which all appear to end up as some variant on "cut the testing to hit the deadline" to me) and the outcome is a given.
I don't think that being able to turn out custom ASICs on the cheap is a worry. When the process that does so takes less than 5 minutes we'll be screwed though....
*There was incentive here. A first time compile on anything of significant size and/or function meant everyone else stood you a beer.
What will happen is that this technology will basically allow cheap ASICs to displace "microcontroller-and-software" based systems, for applications where an ASIC would be better (eg for high performance). As such, new hardware bugs will replace software bugs; and for high-performance real-time kit, software bugs are particularly devious and a big problem.
Overall reliability will probably go up.
/ex hardware man