Sun scientist Joy on the future that doesn't need us

Veteran thinker considers the risks of future technology

Bill Joy, the developer of the vi editor and Berkeley Unix, and later chief scientist of Sun Microsystems and co-designer of Sparc, picoJava and MAJC, has written a landmark essay in the current issue of Wired. It's entitled "Why the future doesn't need us", and it's scary stuff - well worth reading.

Sensationalists, or perhaps Sun's business rivals, have pounced on one extract from the essay - a quotation from Theodore Kaczynski, the Unabomber, discussing human control of machines - although Joy's thesis deals with the risks of future technology. His main concern is destructive self-replication arising from advances in genetics, nanotechnology and robotics (GNR), which he thinks pose greater dangers than the weapons of mass destruction of the 20th century, for the simple reason that GNR technologies do not require massive research facilities. He is particularly exercised by the possibility of robots escaping human control.

Joy is optimistic that Moore's law will continue for another 30 years, and that a fundamental technology shift to molecular electronics will take place this decade: indeed, he notes that this is now beginning to be practical. The subsequent move over the following 20 years, Joy suggests, will be to molecular-level "assemblers" that could enable very low-cost solar power, cures for cancer and the common cold through augmentation of the human immune system, and ultra-cheap, very powerful pocket computers.

Some of the mechanisms and malicious intentions that concern him exist already: there would be no anti-virus software industry without malicious intent, and now that we have it, there is every reason to believe the virus threat will become more dangerous - although it has to be said that for most users the risk is minimal, and that sellers of anti-virus software thrive on scares.
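To get a feel for what another 30 years of Moore's law would mean, here is a minimal back-of-the-envelope sketch. It assumes the classic figure of a doubling in transistor density roughly every 18 months; the historically quoted period has varied between 18 and 24 months, so the exact multiplier is illustrative, not a prediction.

```python
def moore_growth(years: float, doubling_months: float = 18.0) -> float:
    """Return the multiplicative growth factor after `years`,
    assuming one doubling every `doubling_months` months."""
    return 2 ** (years * 12.0 / doubling_months)

# 30 years at an 18-month doubling period: 2**20, about a million-fold.
print(f"{moore_growth(30):,.0f}x")
# The more conservative 24-month period still gives 2**15, about 33,000-fold.
print(f"{moore_growth(30, doubling_months=24):,.0f}x")
```

Either way, the compounding is dramatic enough to make Joy's projected shift to molecular electronics seem less far-fetched than it first sounds.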
A particular fear Joy discusses is the ease with which nanotechnology could be used destructively rather than constructively - for example against geographically localised areas or genetically distinct populations. At worst, the biosphere could even be destroyed by what nanotechnologists call grey goo, the product of runaway self-replication - and self-replication is a key goal of genetic engineering.

Already we can see ethically questionable uses of genetic engineering, geared towards economic rather than evolutionary success. As Joy points out, the US Department of Agriculture has approved 50 genetically engineered crops for unlimited release, and more than half the world's soya beans and a third of its corn now contain genes from other species.

Joy also expresses concern about the malicious use of GNR technology, and begins to examine what might be needed to enforce the relinquishment of certain technologies. His suggestion of a verification regime like that for biological weapons does not seem very practical, but he makes an interesting point about control through intellectual property rights: Joy thinks it may be necessary to provide new forms of protection for intellectual property.

Joy declares himself a supporter of the thesis that more reliable software would make the world a safer and better place, and few would disagree with this, or with his conclusion: "Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided."

It is right that Joy should be drawing attention to such dangers and opportunities, but we confess to considerable cynicism as to whether a robotic world is possible in our lifetimes, if at all. Discoveries such as self-replication in peptides still appear to be a quantum leap from the replication of machines.
The relative failure of artificial intelligence is an important object lesson here, and could presage an ultimately provable principle - akin to Heisenberg's uncertainty principle - that machine intelligence cannot exceed that of the machine's inventor. ®