Boffins harnessed the brain power of mice to build AI models that can't be fooled

How neuroscience can help AI

In a bizarre experiment, researchers recorded the brain activity of mice staring at images and used the data to help make computer vision models more robust against adversarial attacks.

Convolutional neural networks (CNNs) used for object recognition in images are susceptible to adversarial examples: inputs that have been tweaked in some way, whether it's by adding random noise or changing a few pixels here and there, to force a model to incorrectly recognize an object. Adversarial attacks can cause these systems to mistake an image of a banana for a toaster, or a toy turtle for a rifle.
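To get a feel for how these attacks work, here is a minimal sketch of one common technique, the fast gradient sign method (FGSM), in PyTorch. The paper doesn't prescribe this particular attack; the victim model, the epsilon value, and the dummy inputs below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# A pretrained classifier stands in for the victim model (any CNN works)
model = models.resnet18(pretrained=True).eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Fast gradient sign method: nudge every pixel a small step in the
    direction that most increases the classification loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is barely visible to humans but can flip the prediction
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# A random tensor stands in for a real, preprocessed photo
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([954])  # 954 is "banana" in the ImageNet class list
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax(dim=1))  # often no longer predicts "banana"
```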

Machine learning engineers have developed all sorts of techniques to make models less prone to these attacks. Now a group of researchers led by Baylor College of Medicine in Texas has turned to mice for inspiration, according to a paper released on arXiv.

“We presented natural images to mice and measured the responses of thousands of neurons from cortical visual areas,” they wrote.

“Next, we denoised the notoriously variable neural activity using strong predictive models trained on this large corpus of responses from the mouse visual system, and calculated the representational similarity for millions of pairs of images from the model’s predictions.”

As you can tell, the paper is pretty jargony. In simple terms, the researchers recorded the brain activity of mice staring at thousands of images, and used that data to build a computational model of that activity. To make sure the mice were looking at the images, they were “head-fixed” and put on a treadmill.
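The “representational similarity” mentioned above is, roughly, a score for how alike two images look to the modeled mouse cortex. Here is a minimal sketch, assuming each image is summarized by a vector of predicted neural responses; the paper's exact metric may differ.

```python
import torch
import torch.nn.functional as F

def representational_similarity(resp_a, resp_b):
    """Cosine similarity between two neural response vectors, one per
    image; values near 1.0 mean the modeled cortex responds almost
    identically to both images."""
    return F.cosine_similarity(resp_a, resp_b, dim=0).item()

# Hypothetical predicted responses of 5,000 neurons to two images
resp_1 = torch.rand(5000)
resp_2 = torch.rand(5000)
print(representational_similarity(resp_1, resp_2))
```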

Making computer vision models more similar to mouse brains

The researchers then used that model to “regularize” ResNet-18, a type of CNN. Regularization alleviates the effects of overfitting, where a system learns to pick up on specific patterns in its training data that don't generalize well when it's presented with new, unseen data.
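For comparison, conventional regularization often amounts to little more than penalizing large weights; in PyTorch that is a single optimizer argument (the hyperparameter values here are illustrative).

```python
import torch
import torchvision.models as models

model = models.resnet18()
# Standard L2 regularization (weight decay) discourages the network from
# memorizing quirks of its training set by penalizing large weights
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
```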

Essentially, they tweaked ResNet-18 during training so that its features resemble those of the system modeled from the mouse brains. “It biases the original CNN towards a more brain-like representation,” they explained.
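As a rough sketch of what that biasing could look like in training code, the loss below combines the usual classification objective with a penalty that pulls the similarity structure of the CNN's features toward the similarities derived from the mouse data. The function names, the mean-squared-error penalty, and the weighting term alpha are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def similarity_matrix(features):
    """Pairwise cosine similarities between the feature vectors of
    every image in a batch."""
    f = F.normalize(features.flatten(1), dim=1)
    return f @ f.t()

def neural_regularized_loss(logits, labels, cnn_features,
                            mouse_similarity, alpha=1.0):
    """Classification loss plus a penalty pulling the CNN's similarity
    structure toward the mouse-derived one. `mouse_similarity` is
    assumed precomputed from the neural predictive model; `alpha`
    trades off the two terms."""
    task_loss = F.cross_entropy(logits, labels)
    reg_loss = F.mse_loss(similarity_matrix(cnn_features),
                          mouse_similarity)
    return task_loss + alpha * reg_loss
```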

When the CNN was tasked with classifying a different set of images that had not been presented to the mice, its accuracy was comparable to a ResNet-18 model that had not been regularized. But as the researchers began adding random noise to those images, the performance of the unregularized model dropped far more sharply than that of the regularized version.
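Measuring that kind of robustness is simple enough to sketch: score top-1 accuracy on clean images, then again as Gaussian pixel noise ramps up, and compare how quickly each model's curve falls. The noise levels and dummy batch below are stand-ins, not the paper's actual test protocol.

```python
import torch
import torchvision.models as models

@torch.no_grad()
def accuracy_under_noise(model, images, labels, sigma):
    """Top-1 accuracy after adding Gaussian pixel noise with standard
    deviation `sigma` (sigma=0.0 means the images are left clean)."""
    noisy = (images + sigma * torch.randn_like(images)).clamp(0, 1)
    preds = model(noisy).argmax(dim=1)
    return (preds == labels).float().mean().item()

# A dummy batch stands in for a real labeled test set
model = models.resnet18(pretrained=True).eval()
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))
for sigma in (0.0, 0.05, 0.1, 0.2):
    acc = accuracy_under_noise(model, images, labels, sigma)
    print(f"sigma={sigma}: accuracy={acc:.2f}")
```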

“We observed that the CNN model becomes more robust to random noise when neural regularization is used,” the paper said. In other words, the mouse-hardened ResNet-18 model is less likely to be fooled by adversarial examples because it contains features that have been borrowed from real biological mouse brains.

The researchers believe that incorporating these “brain-like representations” into machine learning models could help them reach “human-like performance” one day. But although the results seem promising, the researchers have no idea how it really works.

“While our results indeed show the benefit of adopting more brain-like representation in visual processing, it is however unclear which aspects of neural representation make it work. We think that it is the most important question and we need to understand the principle behind it,” they concluded. ®
