So woke: Microsoft's face-recog can now ID more people who aren't pasty white blokes

This would work great for ICE. We're just saying...

Microsoft has improved its facial recognition technology so that it is better at identifying humans who aren't white men.

Today's announcement of the breakthrough, which promised "significant improvements in the system's ability to recognize gender across skin tones," comes a week after CEO Satya Nadella sent a missive to employees assuring them that the technology would not be used for anything Redmond deems unethical.

The system can be used to pick out faces in a photograph, describe someone's age, gender, smile, head pose, and facial hair from their appearance, and identify a specific person given a database of mugshots.
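For a sense of what that looks like in practice, here is a minimal sketch of a call to the detect endpoint of the v1.0 Face API, requesting the attributes listed above. The subscription key, region, and image URL are placeholders; swap in your own Cognitive Services details.

```python
# Minimal sketch: ask the Azure Face API (v1.0) to find faces in a photo and
# return the attributes mentioned above. The key, region, and image URL are
# placeholders; substitute your own Cognitive Services deployment details.
import requests

SUBSCRIPTION_KEY = "your-cognitive-services-key"  # placeholder
ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def detect_faces(image_url: str) -> list:
    """Return detected faces with age, gender, smile, head pose and facial hair."""
    response = requests.post(
        ENDPOINT,
        params={"returnFaceAttributes": "age,gender,smile,headPose,facialHair"},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"url": image_url},  # the API also accepts raw image bytes
    )
    response.raise_for_status()
    return response.json()  # one entry per face: faceId, faceRectangle, faceAttributes

if __name__ == "__main__":
    for face in detect_faces("https://example.com/photo.jpg"):  # placeholder URL
        print(face["faceRectangle"], face["faceAttributes"])
```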

Microsoft staffers concerned about an earlier blog post – one indicating that, yes, Microsoft was very happy to let the US Immigration and Customs Enforcement (ICE) agency get its hands on the Azure-hosted machine-learning tech – need not have worried. As it turned out, the software did not work too well unless you were male and, er, white: the very people ICE seemingly has no problem with right now.

Similar technology adopted by London's Metropolitan Police got a kicking in a report that documented only two correct face matches – of innocent people – and no arrests using the system.

Microsoft said commercially available facial recognition technologies aren't great at identifying anyone who isn't a pasty white male, and struggle the most with women who have darker skin.

However, as far as its own cloud-based face-recognition tech is concerned, things are now looking up. Thanks to changes in the datasets used to train its machine-learning models, Microsoft was able to reduce the error rates for identifying men and women with darker skin tones by "up to 20 times."

There's no word on what the recognition rate was like before the team expanded the training sets, nor what it truly stands at now. The Register requested more precise figures from Microsoft, and was told by an apologetic spokesperson: "We don’t have any additional details we can share beyond what is in the blog post."

We asked Microsoft to be more specific on what it meant by "up to 20 times," and the Windows giant responded: "It’s a measure of the improvement in the error rates, but we’re not sharing the specific error rates."

It is difficult to ascertain how big the improvement actually is without knowing the starting point: a 20-fold cut could mean an error rate dropping from 20 per cent to 1 per cent, or from a far less embarrassing 2 per cent to 0.1 per cent. Ultimately, the underlying issue is familiar: the AI can only be as good as the data on which it has been trained. Senior Microsoft researcher Hanna Wallach explained:

If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.

The team not only expanded the training datasets, but also collected additional data focusing on skin tone, age, and gender, and finally tweaked the classifier to improve the results, we're told. The resulting data set "held us accountable across skin tones", said Cornelia Carapcea, principal program manager on Microsoft's Cognitive Services Team.

The technology is available to customers through the Face API on Microsoft's Azure cloud, and netizens can upload photos to see how accurately the system processes faces. El Reg had a play, and found it a little inaccurate.
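The one-to-many identification mentioned earlier – matching a face against a database of mugshots – works through the same API: enroll faces into a person group, train it, then ask which enrolled person a freshly detected face resembles. Below is a rough sketch of that flow against the v1.0 REST interface; the group name, key, and image URLs are placeholders.

```python
# Rough sketch of the one-to-many identification flow (Face API v1.0):
# enroll a mugshot into a "person group", train it, then match a new face.
# The group name, key, and image URLs are placeholders.
import requests

KEY = "your-cognitive-services-key"  # placeholder
BASE = "https://westus.api.cognitive.microsoft.com/face/v1.0"
HEADERS = {"Ocp-Apim-Subscription-Key": KEY}

# 1. Create a person group and add a person with one enrolled face.
requests.put(f"{BASE}/persongroups/suspects", headers=HEADERS,
             json={"name": "suspects"}).raise_for_status()
person = requests.post(f"{BASE}/persongroups/suspects/persons",
                       headers=HEADERS, json={"name": "person-001"}).json()
requests.post(
    f"{BASE}/persongroups/suspects/persons/{person['personId']}/persistedFaces",
    headers=HEADERS, json={"url": "https://example.com/mugshot.jpg"},
).raise_for_status()

# 2. Train the group. Training is asynchronous; in practice you would poll
#    GET /persongroups/suspects/training until it reports "succeeded".
requests.post(f"{BASE}/persongroups/suspects/train",
              headers=HEADERS).raise_for_status()

# 3. Detect a face in a probe photo (assumes at least one face is found),
#    then ask which enrolled person it matches.
probe = requests.post(f"{BASE}/detect", headers=HEADERS,
                      json={"url": "https://example.com/probe.jpg"}).json()
matches = requests.post(f"{BASE}/identify", headers=HEADERS, json={
    "faceIds": [probe[0]["faceId"]],
    "personGroupId": "suspects",
    "maxNumOfCandidatesReturned": 1,
}).json()
print(matches)  # candidate personIds with confidence scores
```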

However, if ICE were doing anything with Microsoft beyond its legacy email work, it would likely be very interested in a system with improved recognition rates. ®
