Emergent Tech

Artificial Intelligence

Skynet it ain't: Deep learning will not evolve into true AI, says boffin

Neural networks sort stuff – they can't reason or infer

By Katyanna Quach


Deep learning and neural networks may have benefited from huge quantities of data and computing power, but they won't take us all the way to artificial general intelligence, according to a recent academic assessment.

Gary Marcus, former director of Uber's AI labs and a psychology professor at New York University, argues that deep learning systems face numerous challenges that broadly fall into a handful of categories.

The first is data. It's arguably the most important ingredient in any deep learning system, and current models are too hungry for it: machines require huge troves of labelled data to learn how to perform a task well.

It may be disheartening to know that programs like DeepMind's AlphaZero can thrash all meatbags at chess and Go, but that only happened after it played a total of 68 million games against itself, far more than any human professional will play in a lifetime.

Essentially, deep learning teaches computers how to map inputs to the correct outputs. The relationships between the input and output data are represented and learnt by adjusting the connections between the nodes of a neural network.

[Image: a simple neural network with two hidden layers]
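That input-to-output mapping can be sketched in a few lines of NumPy. The snippet below trains a tiny two-hidden-layer network, as in the image above, to fit the XOR function; the task, layer sizes, learning rate, and iteration count are all illustrative choices, not anything from Marcus's paper, but the mechanism is the one described: errors at the output are pushed back through the network and the connection weights are nudged to reduce them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR. Inputs and target outputs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Two hidden layers of 8 units each (sizes are arbitrary).
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 8)); b2 = np.zeros(8)
W3 = rng.normal(0, 1, (8, 1)); b3 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)
    return h1, h2, out

initial_loss = np.mean((forward(X)[2] - y) ** 2)

lr = 0.5
for _ in range(5000):
    h1, h2, out = forward(X)
    # Backpropagation: push the output error back through each layer.
    d3 = (out - y) * out * (1 - out)
    d2 = (d3 @ W3.T) * h2 * (1 - h2)
    d1 = (d2 @ W2.T) * h1 * (1 - h1)
    # Adjust the connections to reduce the error.
    W3 -= lr * h2.T @ d3; b3 -= lr * d3.sum(axis=0)
    W2 -= lr * h1.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)

final_loss = np.mean((forward(X)[2] - y) ** 2)
print(initial_loss, final_loss)
```

Note that everything the network "knows" about XOR lives in those weight matrices; there is no representation of the rule itself, which is exactly the limitation Marcus highlights next.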

Because a model's knowledge is gleaned only from the data it saw during training, it can perform only in limited environments. AlphaZero may be a single algorithm combining Monte Carlo tree search with self-play, a reinforcement learning technique, but it required two separately trained systems to play chess and Go.

The skills learnt in one game can't be transferred to another. That's because, unlike humans, machines don't actually grasp the underlying concepts, Marcus said. A system may choose to move a pawn, knight, or queen across the board, but it doesn't learn the kind of logical and strategic thinking that would be useful in Go. In fact, it doesn't really understand what any of those pieces represent; it just sees the game as a series of rules and patterns.

This brittleness means that current AI systems struggle with "open-ended inference". "If you can't represent nuance like the difference between 'John promised Mary to leave' and 'John promised to leave Mary', you can't draw inferences about who is leaving whom, or what is likely to happen next," Marcus wrote.

Facebook developed the popular bAbI datasets to test machine reasoning over text, but current models can answer questions only if the answer is stated explicitly in the text.

"Humans, as they read texts, frequently derive wide-ranging inferences that are both novel and only implicitly licensed, as when they, for example, infer the intentions of a character based only on indirect dialog," the paper said.

Another good example is how easily AI can be fooled. There are numerous cases where changing a pixel here or there can trick a computer into believing a car is a dog, or a cat is a bowl of guacamole.

The limits of deep learning have led to success being measured in terms of solving "self-contained" problems, such as pushing benchmark scores on known datasets in Kaggle competitions. Marcus goes further, arguing that trickier challenges that rely on common-sense reasoning are simply beyond what deep learning can do.

"Roughly speaking, deep learning learns complex correlations between input and output features, but with no inherent representation of causality," Marcus wrote. Understanding causality is vital if machines are to learn to reason.
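The correlation-versus-causality gap can be shown with an invented example. Below, the label is caused by feature A alone, but in the training data feature B happens to be an exact copy of A. A least-squares fit, standing in for any correlation-driven learner, spreads its weight across both features, so the moment the spurious A-B correlation is broken while the true mechanism stays the same, the model's error balloons. All the numbers here are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Training world: y is caused by A, and B just happens to mirror A.
A = rng.normal(0, 1, n)
B = A.copy()
y = A + rng.normal(0, 0.1, n)

X = np.column_stack([A, B])
# Minimum-norm least-squares fit: with A and B identical, the weight
# is split between them (roughly 0.5 each) instead of landing on A.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

train_mse = np.mean((X @ w - y) ** 2)

# Intervention: B no longer tracks A, but y is still caused by A.
A2 = rng.normal(0, 1, n)
B2 = rng.normal(0, 1, n)
y2 = A2 + rng.normal(0, 0.1, n)
X2 = np.column_stack([A2, B2])

shift_mse = np.mean((X2 @ w - y2) ** 2)
print(train_mse, shift_mse)  # small in-distribution error, large after the shift
```

Nothing in the fitted weights records that A, not B, produces the label; the model only captured the correlation, which is Marcus's point.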

To make matters worse, researchers don't understand how these neural networks really work. They perform a series of matrix multiplications, but untangling them to understand how these "black boxes" arrive at answers remains an open question. Researchers are racking their brains to come up with ways to make them more transparent so that they can be used more widely in finance and healthcare.

So if you're feeling pangs of existential dread, don't worry. It's unlikely our robot overlords will take over anytime soon. The prospect of artificial general intelligence may indeed arrive one day, but it'll take more than deep learning to get there. ®
