Original URL: https://www.theregister.com/2006/09/25/bogus_ai/

The Emperor's New AI

'It looks like you're trying to have a conversation with a computer. Can I help?'

By Andrew Orlowski

Posted in Science, 25th September 2006 21:14 GMT

In the early 1970s, no science show was complete without predictions of HAL-like intelligent autonomous computers by the turn of the century.

The Japanese, fearing their industrial base would collapse without a response to this omniscient technology, poured hundreds of millions of dollars into their own AI project, called Fifth Generation. They may as well have buried the money in the Pacific Ocean. Two decades later there are no intelligent robots, and "intelligent" computers are a pipe-dream.

(It was an academic coup for MIT's Professor Marvin Minsky, a fixture on those AI slots. Minsky's preferred approach to AI, linguistics-based symbolic AI, triumphed in the grants lottery over a rival school that sought to investigate and mimic the neural functions of the brain. His non-stop publicity campaign helped ensure that his AI lab at MIT was well-rewarded while neural network research starved.)

For the past week, reports have again confidently predicted that intelligent computers are just around the corner. Rollo Carpenter, whose chatbot Joan won an annual AI prize as the software that most resembles a human, predicts that computers will pass the 'Turing Test' by 2016. In this test, computer software fools a human interrogator by passing itself off as human.

(You can spot the flaw already: sounding human isn't a sign of intelligence. And what a pity it is that Turing is remembered more for his muddle-headed metaphysics than for his landmark work on computational machines. It's a bit like lauding Einstein for his opposition to the theory of plate tectonics rather than for his work on relativity, or remembering Newton for his alchemy, not his theory of gravity.)

But let's have a look. A moment's glance at a conversation with Joan, or with her stablemate George, is enough to show us there is no intelligence here.

Here's Joan, as quoted in MIT's Technology Review - a Wired-style glossy:

Judge: So what you're saying is that you are not just human but THE smartest human?

Joan: Yes, why, what did you think I was saying?

Judge: So what is two plus two?

Joan: Four.

Judge: Is it possible for two plus two to not equal four?

Joan: I don't know. Do you think it's possible?

Judge: How many degrees in a triangle?

Joan: 180 degrees.

Judge: Is it possible for more than 180 degrees to be in a triangle?

Joan: Probably not. You can change your opinion, but not your constitution.

Judge: It's not a question of opinion, it's a matter of physics and maths.

Joan: A five-ounce bird could not carry a one-pound coconut.

The first thing you'll notice is that if this is a human, it's a pretty shallow yoof with a zero attention span. Now that's a surprise. It's said that pets eventually resemble their owners. Do AI bots resemble their programmers?

Joan is simply a database of feints and shimmies deployed to change the subject and confuse the questioner. Occasionally Joan will ask the interrogator if they are a robot, or chide them for being stupid. These are all pre-programmed rhetorical tricks. They may bore or bamboozle an interrogator, but this is no indication of intelligence.

That's no surprise when we learn that Joan really is a database of conversational snippets - five million lines of them. It's the same technique deployed by Eliza, the elementary pattern-matching program Joseph Weizenbaum wrote in the 1960s, a version of which is still bundled with Emacs.
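To see how little machinery the trick needs, here's a minimal sketch of an Eliza-style responder in Python. It is an illustration of the technique only - not Carpenter's code, nor Weizenbaum's, and the rules and deflections are invented for the example: match a keyword, reflect the pronouns, and fall back to a canned subject-change when nothing fits.

    import random
    import re

    # Pronoun reflections: turn "I am sad" into "you are sad" before echoing it.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "yours": "mine",
    }

    # (pattern, canned responses) pairs: the entire "mind" is a lookup table.
    RULES = [
        (re.compile(r"i am (.*)", re.I),
         ["Why do you say you are {0}?", "How long have you been {0}?"]),
        (re.compile(r"i (?:think|feel) (.*)", re.I),
         ["Why do you feel {0}?", "Do you often feel that way?"]),
        (re.compile(r".*\?$"),
         ["Why do you ask?", "What do you think?"]),
    ]

    # When nothing matches, change the subject - the feint the judges met.
    DEFLECTIONS = [
        "Are you a robot?",
        "You can change your opinion, but not your constitution.",
        "A five-ounce bird could not carry a one-pound coconut.",
    ]

    def reflect(fragment: str) -> str:
        """Swap first- and second-person words in a captured fragment."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(line: str) -> str:
        """Pattern-match against the rule table; deflect when nothing fits."""
        for pattern, responses in RULES:
            match = pattern.match(line.strip())
            if match:
                fragments = [reflect(g) for g in match.groups()]
                return random.choice(responses).format(*fragments)
        return random.choice(DEFLECTIONS)

    if __name__ == "__main__":
        while True:
            print(respond(input("> ")))

Nothing in it models meaning. Scale the rule table up from a dozen lines to five million and the illusion lasts longer, but its nature doesn't change.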

Weizenbaum was horrified by the fascinated reaction to Eliza, which was devised as a tongue-in-cheek endeavor, and a subsequent epiphany led him to devote much of the rest of his life to urging scientists to cultivate a sense of social responsibility.

Look! A talking dummy.

The pop media's fascination with "intelligent" computers, especially of the talking variety, shouldn't surprise us. It merely mirrors our own anthropomorphic tendencies - our habit of giving things far more human characteristics than they really have, whether it's voices coming out of the static, or faces in wardrobes or cheese toasties. The inanimate Golem brought to life, through human or divine intervention (or both), is a myth that has taken many forms over the years.

As a result, AI has attracted far more than its fair share of flakes, phoneys and the outright naive over the years.

The cost is hard to calculate. There's an obvious resource issue, an opportunity cost, when fatuous endeavors are allowed to crowd out more pressing computing problems. Of all the woes we have with today's computer systems, their inability to hold a conversation must be one of the least important. We'd rather see systems that don't fail and never lose data, and photographs that we know we'll still be able to view in thirty years' time. Today's digital data is designed to be lost, it seems; imagine a generation with no family album, because the ink has bled and the formats can't be read. It isn't science fiction so much as a probable outcome.

And even if an "intelligent" computer were to be devised, it would help us a lot less than we might imagine. The world isn't short of intelligence. It's just very rarely applied to pressing problems.

So this kind of AI work is about as useful to us as research into how we can burn through our carbon fuels faster.

Fortunately, serious researchers may yet be able to shake off the curse of AI - fittingly, in Manchester, where Turing made the flawed philosophical assumption that set academic AI haring down the wrong path for forty years.

At the University of Manchester, Steve Furber, father of the ARM chip, is helping build a "brain box" modelled on biological systems. The research will help design more fault-tolerant computing systems. There isn't a hint of talking (or dancing) robots in sight.
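(The appeal of biologically modelled hardware is easy to state: brains keep working as individual neurons die, because signals are carried by populations of redundant, noisy components rather than by any single part. Here's a toy Python sketch of that principle - our illustration, nothing to do with Furber's actual design:

    import random

    def population_estimate(x: float, n_units: int = 100,
                            noise: float = 0.1, failure_rate: float = 0.2) -> float:
        """Each 'neuron' reports a noisy copy of the signal x, and a random
        fifth of them have died and stay silent. Averaging the survivors
        still recovers x closely: the system degrades gracefully instead
        of crashing the way a machine with one failed CPU would."""
        reports = [x + random.gauss(0, noise)
                   for _ in range(n_units)
                   if random.random() > failure_rate]
        return sum(reports) / len(reports)

    # Prints something close to 42.0 despite the dead units.
    print(population_estimate(42.0))

Lose a few units and the answer barely moves; lose the one CPU in a conventional machine and everything stops.)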

So to get better computers, maybe all we need to do is forget about them being conscious, or intelligent. ®