
Racist self-driving car scare debunked, inside AI black boxes, Google helps folks go with the TensorFlow...

...and AI worker quits over killer robot plans

Roundup Hello, here's a quick recap on all the latest AI-related news beyond what we've already reported this week.

Are self-driving cars racist? You may have seen news reports that autonomous cars are less likely to detect pedestrians with dark skin crossing the road, and are thus more likely to run them over. And yes, the internal alarm bells in your head should be going off, as a closer look at the research behind the stories shows all those headlines screaming about racist AI are a little off the mark.

The academic paper at the heart of the matter described a series of experiments testing different computer vision models, such as the Faster R-CNN model and R-50-FPN, on images of pedestrians with different skin tones. The study's authors, based at the Georgia Institute of Technology in the US, described how they paid humans to look through the collection of roughly 3,500 photos, and individually tag people in the snaps as either “LS” for light skin or “DS” for dark skin, and then trained the neural networks using this dataset. The eggheads took some steps to ensure the manual classification process did not suffer from any cultural biases.

They found that their models subsequently struggled to detect people with dark skin, which led them to conclude: “This study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models.”

That led the internet to conclude that seemingly racist robo-ride software will ignore and run over black pedestrians as they cross the road. However, folks seem to have forgotten that while today's potentially commercially viable self-driving cars use video cameras to see around them, they also have another vision system: lidar. This uses laser light pulses to detect the outline of people crossing the road regardless of their skin color.

So even if a self-driving car's camera-based vision were flawed, and unable to see black people, lidar should still pick out pedestrians regardless of color. And there's no guarantee autonomous vehicles are using the same models, algorithms, and datasets used in this academic study. The likes of Waymo are, hopefully, using something more sophisticated. Therefore, while it's certainly worthwhile investigating and flagging this up as a potential problem, it's just not representative of a realistic self-driving car scenario.

In effect, this study concluded that driverless cars should not rely solely on camera-based vision unless these sorts of biases are taken into account. The good news is that pretty much everyone working on a potentially viable autonomous system is using some kind of ranging technology anyway, typically lidar.

Don’t build killer robots! An employee quit her job at Clarifai, an AI startup focused on computer vision, to fight against the building of autonomous weapons.

Liz O’Sullivan spoke to the American Civil Liberties Union, a nonprofit based in New York, about leaving Clarifai after she decided she didn’t approve of the direction the company was taking. She gave a list of reasons why killer robots should never be built, warning about the lethal consequences of rogue drones, how easily they can be hacked, and how quickly war could escalate if hordes of the machines can be spun up in huge numbers.

“We must remind our government that humanity has been successful in instituting international norms that condemn the use of chemical and biological weapons,” she urged.

"When the stakes are this high and the people of the world object, there are steps that governments can take to prevent mass killings. We can do the same thing with autonomous weapons systems, but the time to act is now."

If that hasn’t scared you off yet, you can read her writing in more detail here.

Making TensorFlow more private: Good news for you AI security nerds. Google has just released TensorFlow Privacy, a library that tackles the problem of training machine-learning models on sensitive data.

Neural network models trained on large datasets have a bad habit of overfitting and memorizing details in the data. That’s no good if you’re training them on private material like people’s personal photos, emails, or medical records.

Techniques such as differential privacy, which injects carefully calibrated noise during training, can stop miscreants from teasing individual training records back out of a finished model. Now, Google has made differential privacy easier to implement with TensorFlow Privacy: you don’t have to be a mathematics buff with a deep understanding of how the technique works in order to use it.
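For a rough idea of what that looks like in practice, here's a minimal sketch of swapping a stock Keras optimizer for the library's differentially private DP-SGD one. The class name, module layout, and hyperparameter values are taken from TensorFlow Privacy's own tutorials and may differ between releases, so treat this as illustrative rather than gospel.

```python
# Minimal sketch: swapping a regular optimizer for TensorFlow Privacy's
# differentially private DP-SGD optimizer. Class names and module paths
# follow the library's tutorials and may vary between releases.
import tensorflow as tf
import tensorflow_privacy

# An ordinary Keras classifier -- nothing privacy-specific here.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),
])

# DP-SGD clips each example's gradient and adds calibrated Gaussian noise,
# which is what provides the differential-privacy guarantee.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,        # per-example gradient clipping bound
    noise_multiplier=1.1,    # amount of noise relative to the clip
    num_microbatches=250,    # must evenly divide the batch size
    learning_rate=0.15,
)

# Per-example (non-reduced) losses are needed so gradients can be clipped
# individually before they are averaged.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
```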

If that sounds interesting to you then look at some of the examples of how it can be applied to your machine learning code and download the software here.

Google also announced other goodies this week following its TensorFlow Dev Summit, which we wrote more about here.

Deploying TensorFlow code: More TensorFlow news: DeepMind have released TF-Replicator, a software library that helps developers deploy TensorFlow code across GPUs and Cloud TPUs.

Models are often optimized for a specific hardware architecture, and although TensorFlow supports CPUs, GPUs, and TPUs, deploying the same model across different chips is a bit of a nightmare.

This is where TF-Replicator comes in: “[It] allows researchers to target different hardware accelerators for Machine Learning, scale up workloads to many devices, and seamlessly switch between different types of accelerators,” DeepMind explained this week. “While it was initially developed as a library on top of TensorFlow, TF-Replicator’s API has since been integrated into TensorFlow 2.0’s new tf.distribute.Strategy.”
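To give a flavour of what that integration looks like from the developer's side, here's a minimal sketch using TensorFlow 2.0's tf.distribute.MirroredStrategy to spread an ordinary Keras model across whatever GPUs are attached. This is generic TensorFlow usage rather than DeepMind's internal TF-Replicator code; TPUStrategy would be the analogous choice for Cloud TPUs.

```python
# Minimal sketch of TensorFlow 2.0's tf.distribute.Strategy, the API that
# absorbed TF-Replicator's approach. Generic TensorFlow usage, not
# DeepMind's internal code.
import tensorflow as tf

# MirroredStrategy replicates the model across all local GPUs and keeps
# the copies in sync; tf.distribute.TPUStrategy targets Cloud TPUs instead.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored on every device.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')

# Training then looks the same as on a single device; each batch is split
# across the replicas automatically:
# model.fit(dataset, epochs=10)
```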

The tool was developed internally by DeepMind’s Research Platform Team, which used it to train BigGAN, a generative adversarial network that produces hyper-realistic images.

You can learn more about it here.

What are your neurons looking at? Researchers at Google and OpenAI have released a new tool that helps visualize the interactions between neurons in AI systems.

Neural networks are often described as “black boxes”. Inputs are fed to the machine, and it magically spits out an output. But what goes on inside? How does it arrive at its answer? No one really knows since it all boils down to heavy number crunching as data is processed through all the model’s hidden layers.

The researchers have tried to crack this so-called black box by creating “activation atlases”, a technique that allows you to probe how a computer vision model tells apart similar items like frying pans and woks.

It works by building a map out of a series of data points. Each marker represents the activation vector that a hidden layer of the neural network produces for a patch of an input image as the training data flows through the model.

Similar vectors are placed closer together and visualized as cells in the activation atlas, so researchers can tell which features the neural network is focusing on when it’s recognizing objects in an image.
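As a rough sketch of the underlying idea (not the researchers' actual tool, which layers grid averaging and feature visualization on top), the mapping step amounts to collecting a network's activation vectors for a pile of images and projecting them down to two dimensions so that similar ones land near each other. The network, layer, and projection method below are illustrative stand-ins, and the random array substitutes for real photos.

```python
# Rough sketch of the mapping step behind activation atlases: gather a
# network's activation vectors for many images and project them to 2D so
# similar vectors end up close together. The published technique goes
# further (grid averaging plus feature visualization); the model and
# projection here are illustrative choices only.
import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE

# Pretrained image classifier with its classification head removed, so the
# output is a pooled activation vector rather than class probabilities.
extractor = tf.keras.applications.MobileNetV2(
    weights='imagenet', include_top=False, pooling='avg')

# Stand-in for a real batch of preprocessed 224x224 RGB photos.
images = np.random.rand(256, 224, 224, 3).astype('float32')
activations = extractor.predict(images)    # shape: (256, 1280)

# Project the high-dimensional activations down to 2D: points that land
# near each other correspond to inputs the network 'sees' as similar.
atlas_coords = TSNE(n_components=2, perplexity=30).fit_transform(activations)
print(atlas_coords.shape)                   # (256, 2)
```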

You can read about it in more detail here. ®
