Google learns to smile, because AI's bad at it

Biased models mean bad decisions for women and some races. Google boffins think they've improved things a bit

By Richard Chirgwin


Google's taken a small step towards addressing the persistent problem of bias in artificial intelligence, setting its boffins to work on equal-opportunity smile detection.

In a paper published on arXiv on December 1, Mountain View trio Hee Jung Ryu, Margaret Mitchell and Hartwig Adam laid out the results of research designed to handle the twin problems of gender and race diversity when machine learning is applied to images.

Biased models have become a contentious issue in AI over the course of the year, with study after study documenting both the extent of algorithmic bias and its real-life impacts, such as women being shown ads for lower-paying jobs and African-Americans being served ads suggestive of arrest records. In spite of this, researchers remain comfortable making phrenology-like claims about identifying criminal faces, or believing their AI can spot beautiful women.

Google's authors agreed that bias is an issue, and wrote “users have noticed a troubling gap between how well some demographics are recognised compared with others”. Problems they noted included mis-gendering a woman simply because she's not wearing makeup, or being unable to classify a black face at all.

The paper stated that Google is not seeking to classify people by race (since that's both unethical and arbitrary), and the authors noted that using AI to classify race or gender needs the individual's consent.

Nonetheless, training race and gender recognition into the model is necessary if the AI is going to reliably identify a smile, and that's how the researchers approached the problem: “At the core of this work lies the idea that faces look different across different races and genders, and that it is equally important to do well on each demographic group”, the researchers wrote.

First, the researchers applied a more granular view of misclassifications: “we report values for accuracy per subgroup … [and] we also introduce a metric that evaluates false positive rates and false negatives rates in a way that is robust to label and subgroup imbalances”.

That helped them correct for a common sampling bias in training data sets: many have a preponderance of white European faces.
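The per-subgroup bookkeeping the paper describes can be sketched roughly like so; this is a minimal Python illustration in the spirit of the approach, not Google's code, and the function name and subgroup labels are ours:

```python
# Illustrative per-subgroup error reporting: instead of one global accuracy,
# compute accuracy, false positive rate and false negative rate separately
# for each demographic subgroup, so imbalances can't hide behind an average.
from collections import defaultdict

def subgroup_rates(y_true, y_pred, subgroups):
    """Return per-subgroup accuracy, FPR and FNR for binary labels."""
    stats = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for truth, pred, group in zip(y_true, y_pred, subgroups):
        s = stats[group]
        if truth and pred:
            s["tp"] += 1
        elif not truth and pred:
            s["fp"] += 1
        elif truth and not pred:
            s["fn"] += 1
        else:
            s["tn"] += 1
    report = {}
    for group, s in stats.items():
        total = sum(s.values())
        positives = s["tp"] + s["fn"]   # actual smiles in this subgroup
        negatives = s["fp"] + s["tn"]   # actual non-smiles in this subgroup
        report[group] = {
            "accuracy": (s["tp"] + s["tn"]) / total,
            "fpr": s["fp"] / negatives if negatives else 0.0,
            "fnr": s["fn"] / positives if positives else 0.0,
        }
    return report
```

A classifier can then be judged on its worst subgroup rather than its overall score, which is the point of the paper's metric.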

With that classification in hand, the researchers then applied over-sampling to groups under-represented in the dataset. For subgroups too small for that to work, they made their own decisions (an “off-line oversampling method in order to make sure each training batch contains faces across all race × gender”, as they wrote).
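An offline oversampling scheme of that sort might look like the following sketch; again the function and batch layout are our assumptions for illustration, not the researchers' implementation:

```python
# Illustrative offline oversampling: build each training batch so it
# contains faces from every race x gender subgroup, sampling with
# replacement where a subgroup is too small to fill its quota.
import random

def balanced_batch(examples_by_subgroup, per_subgroup=2, rng=random):
    """Draw a batch with `per_subgroup` examples from every subgroup."""
    batch = []
    for subgroup, examples in examples_by_subgroup.items():
        if len(examples) >= per_subgroup:
            batch.extend(rng.sample(examples, per_subgroup))
        else:
            # Under-represented subgroup: over-sample with replacement
            batch.extend(rng.choices(examples, k=per_subgroup))
    rng.shuffle(batch)
    return batch
```

Because the balancing happens when batches are assembled, the model sees every subgroup in every training step regardless of how skewed the raw dataset is.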

The results: up to 99 per cent gender accuracy on the 200,000-image CelebA dataset. On the Faces of the World (FotW) dataset, gender and race accuracy was above 90 per cent for most subgroups. On a dataset collected by scraping 100,000 celebrity images from the Web, the researchers wrote, they trained their model to "98 per cent or greater" area under the curve.

Which brings us to smile detection: the more granular pre-processing yielded smile detection accuracy over 90 per cent across the whole dataset, by gender, by race, or subgrouped by both gender and race. ®


