
AI software that can reproduce like a living thing? Yup, boffins have only gone and done it

Talk about telling your code to go screw itself

A pair of computer scientists have created a neural network that can self-replicate.

“Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems,” they argue in a paper popped onto arXiv this month.

It’s a fundamental process in the reproduction of living things, and a necessary ingredient for evolution through natural selection. Oscar Chang, first author of the paper and a PhD student at Columbia University, explained to The Register that the goal was to see whether AI could be made continually self-improving by mimicking the biological self-replication process.

“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection - just like in nature - if there was a self-replication mechanism for neural networks.”

The researchers compare their work to quines, programs that produce copies of their own source code. In a neural network, however, it isn’t source code that gets cloned but the weights - the numbers that determine the strength of the connections between neurons.
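For the uninitiated, a quine looks like this - a classic two-liner in Python whose output is, character for character, its own source:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

The trick is that the string s is a template for the whole program: printing s % s fills the template with its own representation.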

The researchers set up a “vanilla quine” network, a feed-forward system that produces its own weights as outputs. The idea can be extended so that the network self-replicates while also solving a task: they chose image classification on the MNIST dataset, in which the computer has to identify the correct digit in a set of handwritten figures from zero to nine.
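To give a flavour of how this might look in code, here is a minimal sketch in PyTorch. This is our own illustration, not the authors’ code: it assumes each weight is addressed by a fixed random embedding of its index, and the network is trained so that its prediction for each weight matches the weight itself. The name VanillaQuine and all the details are ours.

    import torch
    import torch.nn as nn

    class VanillaQuine(nn.Module):
        # Sketch only: a small net asked to output its own weights.
        # Each weight coordinate gets a fixed random code; the net maps
        # that code to a predicted value for the corresponding weight.
        def __init__(self, emb_dim=32, hidden=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(emb_dim, hidden, bias=False),
                nn.Tanh(),
                nn.Linear(hidden, 1, bias=False),
            )
            n_params = sum(p.numel() for p in self.body.parameters())
            # Fixed (untrained) codes, one per weight coordinate
            self.register_buffer("codes", torch.randn(n_params, emb_dim))

        def predicted_weights(self):
            return self.body(self.codes).reshape(-1)

        def actual_weights(self):
            return torch.cat([p.reshape(-1) for p in self.body.parameters()])

    def self_replication_loss(net):
        # Squared gap between the net's guess at its weights and the real thing
        diff = net.predicted_weights() - net.actual_weights()
        return (diff ** 2).sum()

    net = VanillaQuine()
    opt = torch.optim.SGD(net.parameters(), lr=1e-3)
    for _ in range(1000):
        opt.zero_grad()
        self_replication_loss(net).backward()
        opt.step()

The paper’s actual training setup differs in its details, but the core objective - minimise the gap between predicted and actual weights - is the same idea. One obvious catch, visible even in this sketch: a network with all-zero weights “predicts” itself trivially, so any serious attempt has to steer away from that degenerate fixed point.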

These networks are small, with at most 21,100 parameters, compared to the several million found in standard image-recognition models.

Accuracy?

The network was trained on MNIST’s 60,000 training images and tested on a further 10,000. After 30 runs, the quine network achieved an accuracy of 90.41 per cent. Not a bad start, but its performance doesn’t really compare with that of larger, more sophisticated image-recognition models.

The paper states that “self-replication occupies a significant portion of the neural network’s capacity.” In other words, the neural network struggles to focus on the image recognition task when it also has to self-replicate.

“This is an interesting finding: it is more difficult for a network that has increased its specialization at a particular task to self-replicate. This suggests that the two objectives are at odds with each other,” the paper said.
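In loss-function terms, the tension is easy to see. Here is another sketch of ours, not the paper’s code, reusing self_replication_loss from above; the classify() head and the weighting factor lambda_sr are hypothetical:

    import torch.nn.functional as F

    def combined_loss(net, images, labels, lambda_sr=1.0):
        # One set of weights serves both objectives, so capacity spent on
        # reproducing the weights is capacity not spent on reading digits
        task_loss = F.cross_entropy(net.classify(images), labels)  # classify() is hypothetical
        return task_loss + lambda_sr * self_replication_loss(net)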

Chang explained that he wasn’t sure why this happens, but noted that a similar trade-off shows up in nature.

“It's not entirely clear why this is so. But we note that this is similar to the trade-off made between reproduction and other tasks in nature. For example, our hormones help us to adapt to our environment and in times of food scarcity, our sex drive is down-regulated to prioritize survival over reproduction,” he said.

So at the moment it looks like self-replication in neural networks isn’t all that useful, but it’s still an interesting experiment.

“To our knowledge, we are the first to tackle the problem of building a self-replication mechanism in a neural network. As such, our work should be best viewed as a proof of concept,” he added.

But the researchers hope that one day it might come in handy for computer security, or for self-repair in damaged systems.

“Learning how to enhance or diminish the ability for AI programs to self-replicate is useful for computer security. For example, we might want an AI to be able to execute its source code without being able to read or reverse-engineer it, either through its own volition or interaction with an adversary.”

Self-replication is used for self-repair in damaged physical systems, he noted. “The same may apply to AI, where a self-replication mechanism can serve as the last resort for detecting damage, or returning a damaged or out-of-control AI system back to normal,” Chang added. ®
