Mything the point: The AI renaissance is simply expensive hardware and PR thrown at an old idea

There is no ghost in the machine

Comment For the last few years the media has been awash with hyperbole about artificial intelligence (AI) and machine learning technologies. It could be said that never, in the field of computer science, have so many ridiculous things been said by so many people in possession of so little relevant expertise. For anyone engaged in cutting-edge hardware in the 1980s, this is puzzling.

In this month's issue of The Atlantic, the high-profile intellectual and best-selling author of Sapiens and Homo Deus, Yuval Harari, describes the impact that AI will have on democracy. What is perhaps most interesting about this article is Dr Harari's extraordinary faith in the capabilities of current AI technologies. He describes Google-stablemate DeepMind's chess software as being "creative", "imaginative", and even in possession of "genius instincts".

Meanwhile, the BBC's The Joy of AI documentary finds Professor Jim Al-Khalili and DeepMind founder Demis Hassabis describing how an apparently artificially intelligent system has "made a genuine discovery", "can actually come up with a new idea", and has developed "strategies that it has intuited by itself".

With such a torrent of exaggerations and anthropomorphisms being used to describe what are, essentially, dumb and mechanistic systems, now could be a good time for some kind of back-to-basics hardware reality check.

Discussions about computer technologies tend to be conducted via myths, metaphors, and human interpretations of what is presented to us via the computer screen. Metaphors such as "intuition", "creativity", and novel "strategies" are part of an emerging mythology. Pundits identify patterns in the software's game moves and call them "strategies", but the neural network has no idea what a "strategy" is. If there really is any "creativity" here, it is the creativity of the DeepMind researchers who devise and manage the processes that train the systems.

Today's AI systems are trained through a massive amount of automated trial and error; at each stage a technique called backpropagation is used to feed errors back through the system and tweak its internal connection weights so as to reduce future errors – thereby gradually improving the AI's performance on a particular task, such as chess.
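
To make this concrete, here is a deliberately tiny illustration – a toy, not DeepMind's code – of the mechanism just described. A two-layer network of sigmoid units is trained on the XOR problem: the output error is fed backwards through the network, and each weight is nudged in the direction that reduces that error.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training data: pairs of inputs and their target outputs
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# A 2-2-1 network: two hidden sigmoid units and one sigmoid output,
# each with two input weights plus a bias (the third weight).
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
lr = 0.5
for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: the output error is fed backwards to work out
        # each weight's contribution, and every weight is nudged in the
        # direction that reduces the error.
        d_out = (y - t) * y * (1 - y)
        for j in range(2):
            d_h = d_out * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            w_h[j][2] -= lr * d_h
        w_o[0] -= lr * d_out * h[0]
        w_o[1] -= lr * d_out * h[1]
        w_o[2] -= lr * d_out
loss_after = mse()
```

The error shrinks as the weights are tweaked, yet nothing here "understands" XOR; the same mechanical weight-nudging, scaled up enormously, is what trains a chess-playing network.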

The current surge in effectiveness of AI, "machine learning" and so-called "deep learning" systems is largely based on the application of this backpropagation technique – first invented in the 1960s, and applied to neural networks by Geoffrey Hinton in the mid-1980s. In other words, there has not been any significant conceptual progress in AI for more than 30 years. Most of what we are currently seeing in AI research and the media is what happens when an awful lot of very expensive computing hardware, and a sophisticated PR campaign, is thrown at a fairly old idea.

This is not to say that DeepMind's work is not valuable. A machine assisting with the generation of new strategies and ideas is very interesting – particularly if the operation of that machine is difficult for a human to fathom because of its vast complexity. In our largely secular culture, the magic and mystique of technology is seductive, and the appearance of some mystery in the largely dry rational field of engineering is very welcome. But there is no ghost in the Google-stablemate's machine.

Hardware vs software, analogue vs digital, Thompson vs Hassabis

New Scientist, November 1997

All of the fuss around DeepMind machines reminds me of the excitement generated two decades ago by a very different and arguably more profound "machine learning" system.

In November 1997, the work (PDF) of Adrian Thompson – a researcher at Sussex University's Centre for Computational Neuroscience and Robotics – made the front cover of New Scientist, in a piece headed: "Creatures From Primordial Silicon – Let Darwinism loose in an electronics lab and just watch what it creates. A lean, mean machine that nobody understands."

Thompson's work caused a minor sensation because he had defied convention and evolved his machine learning system in electronic hardware – rather than using a conventional software approach. He had chosen to do this because he realised that the capabilities of all digital computer software are constrained by the binary on/off nature of the switches that make up the processing brain of every digital computer.

By contrast, neurons in the human brain have evolved to make use of all sorts of subtle and almost unfathomably complex physical and biochemical processes. Thompson hypothesised that evolution of computer hardware through an automated process of natural selection might exploit all of the analogue (ie, infinitely variable) real-world physical properties of the silicon medium out of which a computer's simple digital switches are built – maybe resulting in something reminiscent of the efficient analogue operation of human brain components. And he was right.

What Thompson did in his lab was to evolve a configuration of an FPGA (a field-programmable gate array – a type of digital silicon chip whose internal connections between switches can be repeatedly reconfigured) so that it would discriminate between two different audio tones. When Thompson then looked inside the FPGA chip to see how the connections between the switches had been configured by the evolutionary process, he found an impressively efficient circuit design – using a mere 37 components.
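
Thompson's set-up can be caricatured in software as a simple genetic algorithm: treat each candidate FPGA configuration as a bitstring, score each with a fitness function, keep the fittest, and mutate them to breed the next generation. The sketch below is purely illustrative – the fitness function here merely rewards matching a target bit pattern, whereas Thompson's fitness came from measuring how well the physical chip discriminated the two tones.

```python
import random

random.seed(1)

# Each genome is a bitstring standing in for an FPGA configuration;
# fitness counts how many bits match an arbitrary target pattern.
GENOME_LEN = 64
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    """Number of bits matching the target (higher is better)."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [1 - b if random.random() < rate else b for b in genome]

# Random initial population of candidate "configurations"
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]
initial_best = max(fitness(g) for g in population)

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # selection: keep the fittest
    children = [mutate(random.choice(parents)) for _ in range(20)]
    population = parents + children                 # elitism: parents survive

best = max(population, key=fitness)
```

Because the fittest candidates always survive, the best score can only climb from generation to generation – but note that in this software version the "circuit" is pure abstraction. What made Thompson's experiment remarkable was that his fitness measurements came from real silicon, so evolution was free to exploit physical effects no simulation would have captured.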

Not only that, the evolved circuit no longer made sense to the digital engineers. Some of the 37 components were not electrically connected to the others, but when they were removed from the design, the whole system stopped working. The only viable explanation for this strange situation was that the system was making use of some kind of mysterious electromagnetic connection between its supposedly digital components. In other words, the evolutionary process had recruited the real-world analogue characteristics of the system's components and materials in order to perform the "computation".

This blew my mind. Being a young researcher in the 1990s, and having a background in both electronic hardware research and AI, I found Thompson's work awe-inspiring. A computer had not only managed to invent a completely new species of electronic circuitry, and transcend the abilities of human electronic engineers, but, more importantly, it seemed to point the way towards developing infinitely more powerful computer systems and AI.


Hassabis started out as lead AI programmer for now-defunct Lionhead Studios' god game, Black & White

So what happened? Why is Thompson relatively obscure, while Hassabis has Google-parent Alphabet filling his boots with cash and the BBC making eulogising documentaries? A lot of the answer comes down to timing. Back in the 1990s, AI was about as fashionable as John Major's underpants. Today AI carries the burden of ushering in a "Fourth Industrial Revolution". Capital chases the Next Big Thing. While DeepMind digital AI systems are not very useful for modelling complex real-world analogue systems such as the weather or human brains, they certainly are well suited to crunching the digital data that flows from the simplistic online binary world of links, clicks, likes, shares, playlists, and pixels.

DeepMind has also benefited from an understanding of the power of showmanship. It has marketed its technology and senior personnel by cultivating technical mystique, but its demos have all been about playing games with simple computable rules. Games also have the advantage of being highly relatable and visually interesting for the media and general public. In reality, most commercial applications of this technology will be fairly banal backroom business applications, such as optimising power efficiency in the data centres where Google keeps its computers.

Ceci n'est pas une paddle

What Thompson and Hassabis certainly have in common – apart from Britishness – is the skill and creativity required to train and evolve their systems effectively, but this dependence on human skill and creativity is obviously a weakness for any "artificial intelligence" or machine learning system. Their respective technologies are also very brittle. For example, Thompson's systems often stopped working at temperatures different to those at which they were evolved. Meanwhile, merely changing the size of the paddle in one of DeepMind's video game systems completely destroys the AI's performance. This fragility stems from the fact that DeepMind's AI software does not know what a paddle – or even a video game – actually is; its switches can only deal in binary numbers.

Machine learning systems have certainly made great strides recently, but that progress has primarily been won by throwing huge quantities of conventional computing hardware at the problem, not by radical innovation. At some point in the near future, it will no longer be possible to cram any more tiny silicon switches onto a silicon chip. Design efficiency (ie, doing more processing with less hardware) will then become commercially important, and this could be the moment when evolvable forms of hardware finally come into vogue.

Hybrid systems combining the approaches of both Thompson and Hassabis may appear too. But whatever happens, Dr Harari will have to wait a while before he thinks about purchasing a "creative" AI system to write his next best-selling book. ®

Andrew Fentem has worked in human-computer interaction research and hardware development for over 30 years. He pioneered multitouch surface technologies before Apple entered the field.


In 2016, Apple released a version of the iPhone that was reported to contain a mysterious FPGA (a chip similar to the one that Thompson used in his evolutionary hardware experiments). Nobody seemed to know what the strange chip was for. For a moment I dared to wonder whether Apple had developed some kind of exotic evolvable AI hardware. But unfortunately they hadn't. Or haven't yet.
