You Look Like a Thing and I Love You: A quirky investigation into why AI does not always work

Flaws are 'far beyond merely inconvenient', writes Janelle Shane

Book review Everyday AI has the approximate intelligence of an earthworm, according to Janelle Shane, a research scientist at the University of Colorado who is better known as an AI blogger.

Since AI is both complicated and massively hyped, and therefore widely misunderstood, her new book is a useful corrective.

You Look Like a Thing and I Love You is both funny and annoying. It is based partly on content from the author's AI Weirdness blog, where she recounts what happens when you use artificial intelligence for unusual purposes. Examples include creating chat-up lines (one result being the title of the book), writing recipes, telling jokes, sorting tasty sandwiches from disgusting ones, and creating robots for crowd control.

What these examples drive home is that AI has no real understanding of what it is doing. Shane could get the AI to come up with some quite convincing-looking recipes, for example, from a model trained from thousands of real ones. If you look more closely though, you see that all the recipes are nonsense, because the AI has no idea what makes for a tasty dish, or even how to cook. It is just generating text from patterns it has learned.
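Shane's experiments used neural networks, but the underlying idea – a model that emits text purely from statistical patterns it has absorbed, with no grasp of meaning – can be sketched with a simple word-level Markov chain (the tiny recipe corpus here is invented for illustration):

```python
import random
from collections import defaultdict

def train_markov(text, order=1):
    """Record which word tends to follow each word in the training text."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=0):
    """Emit words by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("preheat the oven to 350 degrees . mix the flour and the sugar . "
          "add the eggs and beat well . pour the batter into the pan .")
model = train_markov(corpus)
print(generate(model))
```

The output looks recipe-shaped because the word-to-word transitions are real, but the "instructions" are senseless – the model never knew what an oven is.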

You Look Like a Thing and I Love You by Janelle Shane

Shane uses these examples and others drawn from the history of AI to explain some basics about how it learns and how it comes up with its predictions or imitations. AI is different from algorithmic programming in that it learns by example. Want image recognition? Supply a large database of images categorised according to what you want the AI to recognise, train the model, and the AI will work out its own rules for categorising new images.
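That recipe – labelled examples in, learned rules out – is supervised learning. A minimal sketch using scikit-learn's bundled digit images (the library choice and model are illustrative assumptions; the book does not prescribe any particular toolkit):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A database of small images, each labelled with the digit it depicts.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Train: the network works out its own rules for separating the categories.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Categorise images it has never seen before.
print("accuracy on unseen images:", clf.score(X_test, y_test))
```

No rules for "what a 7 looks like" were ever written down; they emerge from the labelled examples, which is exactly why the quality of those examples matters so much.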


This approach has magical power in the right circumstances, but it is also problematic. The quality of the result is only as good as the quality of the dataset. The AI may develop faulty rules. For example, it might only recognise sheep if they are on grassy backgrounds, so whereas a human could easily spot a sheep in a living room, the AI might not.
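The sheep-on-grass failure is a spurious correlation, and it is easy to reproduce with a toy numeric stand-in (the features and data here are entirely synthetic, invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
# Training "photos": feature 0 = woolly texture (a noisy but genuine cue),
# feature 1 = grassy background (perfectly correlated with sheep in training).
labels = rng.integers(0, 2, n)             # 1 = sheep, 0 = no sheep
texture = labels ^ (rng.random(n) < 0.2)   # right about 80% of the time
grass = labels                             # grass whenever there is a sheep
X_train = np.column_stack([texture, grass])

clf = DecisionTreeClassifier(random_state=0).fit(X_train, labels)

# A sheep in a living room: woolly texture, but no grass.
sheep_indoors = np.array([[1, 0]])
print(clf.predict(sheep_indoors))  # prints [0]: it learned "grass", not "sheep"
```

Because the background was a cleaner predictor than the animal itself, the model's "rule for sheep" is really a rule for grass – and it fails the moment the sheep moves indoors.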

In general, AI is bad at spotting unusual things. This has consequences for many use cases. Autonomous vehicles work fine most of the time, for example, but are also capable of catastrophic errors if something unexpected occurs. "It's inevitable that something will occur that an AI never saw in training," says Shane. At this point the AI may be smart enough to hand control to a human, but "humans are very, very bad at being alert after boring hours of idly watching the road," she writes.

Bias in AI is another issue. "Responsible AI", for example, was the title of a press session at the Google Next event in London this week. Everyone agrees on the importance of avoiding bias. Read Shane's book, though, and you will conclude that it is all but impossible.

AI inherits the bias of the data it is given, and if that data comes from humans, it will not be neutral. Amazon, we are told, gave up on using AI to identify promising job applications because, among other things, it could not eradicate gender bias. Simply removing gender information was insufficient, as the AI used other clues to prefer male applicants – because they were preferred in the data on which it was trained. Huge effort is expended to work around problems like this, but it is difficult – made worse by the fact that working out exactly how an AI process has reached its conclusions can itself be a challenge.
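Why deleting the gender column doesn't help can be shown with a toy synthetic example (the data, the "proxy" field and the model are all invented for illustration, not Amazon's actual system): any field that correlates with the removed attribute lets the model reconstruct the historical bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
gender = rng.integers(0, 2, n)                       # 0 = male, 1 = female
# A proxy field that tracks gender closely, e.g. a hobby keyword on the CV.
proxy = np.where(rng.random(n) < 0.9, gender, 1 - gender)
skill = rng.random(n)                                # genuinely job-relevant
# Historical decisions were biased: only skilled male applicants were hired.
hired = ((skill > 0.5) & (gender == 0)).astype(int)

# Train WITHOUT the gender column – only skill and the innocuous-looking proxy.
X = np.column_stack([skill, proxy])
clf = LogisticRegression().fit(X, hired)

# Two applicants with identical skill but different proxy values:
print(clf.predict_proba([[0.9, 0], [0.9, 1]])[:, 1])
```

The model assigns a noticeably lower hiring probability to the proxy value associated with women, despite never seeing a gender column – it simply relearned the bias through the correlated clue.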

One of Shane's points early in the book is that AI only works when it is specialised. You can teach it to play chess or identify images, but general intelligence in the style of a sci-fi robot like C-3PO is way beyond today's AI. How smart is AI? Maybe as smart as an earthworm, she says, for everyday examples, or as a honeybee for the most powerful neural networks. Human-like general intelligence is a long way off.

This is not a technical book, but it does explain the essentials of topics including neural networks, training models, Markov chains and generative adversarial networks. It is a good title to give someone who thinks AI will solve all our problems. Not that Shane is gloomy about AI; it is obvious that she loves what it does. There are some real dangers, though. "As more of our daily lives are governed by algorithms, the quirks of AI are beginning to have consequences far beyond the merely inconvenient," she writes.

Conclusion? "There is every reason to be optimistic about AI, and every reason to be cautious," she remarks. The problem, however, is not that AI is too smart, but that it is not smart enough. Trust it too much and... well, you have been warned.

You Look Like a Thing and I Love You is published by Wildfire, ISBN 9781472268990. ®

Biting the hand that feeds IT © 1998–2019