Oh dear! Amazon's facial recognition is racist and sexist – and there's a JLaw deepfake that will make you want to tear out your eyes
The week's other news in AI
Roundup Here's a roundup of this week's other AI news. In short: experts continue to snub Amazon's facial recognition service Rekognition, and there's a new deepfake for you to stare at in horror.
China and the US are miles ahead: A study compiled by the UN World Intellectual Property Organization found that China and the US dominate the AI industry, with both countries leading the way in patents and academic research.
It’s no surprise, really; experts have been preaching this for years. The specifics in the report, however, are still interesting. Here are some of the key takeaways:
- Nearly 340,000 AI-related patents have been filed since the term was coined in the 1950s.
- Out of those applications, over half of them were submitted since 2013 when the boom kicked in.
- The recent rise of AI is down to the resurrection of machine learning, so it’s no surprise that 49 per cent of patent applications are computer-vision related - an area that has had the most success.
- IBM has filed more AI patents than any other company or university, with 8,290 inventions so far. Next is Microsoft with 5,930. Japan’s Toshiba is third at 5,223. No Chinese companies made the top three, but the report notes that the number of applications filed increased a whopping 70 per cent annually from 2013 to 2016, so China is catching up fast.
- 434 AI companies have been acquired since 1998; over half of those takeovers - 53 per cent - were made since 2016.
- Alphabet has scooped up the most AI startups. Other US giants also investing include Apple and Microsoft. Half of the top 20 AI papers are from Chinese companies and research institutions.
You can read the whole report here.
Amazon’s Rekognition PR disaster continues: Amazon’s mug matching technology, Rekognition, has made headlines again.
In July, an investigation by the American Civil Liberties Union (ACLU) showed Rekognition was being sold to government agencies and could be inaccurate - especially when trying to identify people with darker skin tones.
Now, a research paper published by the Massachusetts Institute of Technology Media Lab provides further proof. Rekognition performed worse when analysing pictures of women - they were misidentified as men 19 per cent of the time - and results dropped even further when the women were black.
Matt Wood, general manager of AI at Amazon Web Services, hit back and said the researchers were studying an outdated version of Rekognition, and that Amazon was working to improve the product.
He also pointed out that when the service is used by law enforcement, Amazon recommends a 99 per cent confidence threshold - the percentage describes how confident the system must be in a match before it returns a result.
But unfortunately, it looks like police departments don’t really care. The Washington County Sheriff’s Office in Oregon, one of Amazon’s few publicly identified customers, told Gizmodo: “We do not set nor do we utilize a confidence threshold.”
Oh dear. Since there is currently no governmental regulation around the use of facial recognition, police can, technically, use it however they want.
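That threshold is just a cut-off on the similarity score the service hands back. Here's a minimal sketch of what applying it - or skipping it, as Washington County does - looks like in practice. The scores and dictionary shape are made up for illustration, not real Rekognition output:

```python
# Hedged sketch: filtering face matches by a confidence threshold.
# Each match carries a "Similarity" percentage (invented values here).

def filter_matches(face_matches, threshold=99.0):
    """Keep only matches at or above the confidence threshold."""
    return [m for m in face_matches if m["Similarity"] >= threshold]

matches = [
    {"Similarity": 99.7},  # a genuinely confident match
    {"Similarity": 91.2},  # plausible-looking, but below the bar
    {"Similarity": 80.5},  # decidedly shaky
]

# With Amazon's recommended 99 per cent threshold, one match survives.
print(len(filter_matches(matches)))       # prints 1

# With no threshold at all, every candidate - however shaky - comes back.
print(len(filter_matches(matches, 0.0)))  # prints 3
```

The gap between those two numbers is exactly why researchers fret when a sheriff's office says it doesn't set a threshold: every low-confidence guess gets treated as a hit.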
Fighting facial recognition biases: While we’re on the topic of facial recognition, IBM has published a dataset that promises to be more diverse to combat biases.
The Diversity in Faces dataset contains a million labelled photos of human faces scraped from the Creative Commons sections of Yahoo and Flickr. Developers selected the images based on a variety of factors, including: head length, nose length, forehead height, facial symmetry, age, gender, pose and so on.
“The challenge in training AI is manifested in a very apparent and profound way with facial recognition technology,” IBM said.
"Today, there can be difficulties in making facial recognition systems that meet fairness expectations. The heart of the problem is not with the AI technology itself, per se, but with how the AI-powered facial recognition systems are trained. For the facial recognition systems to perform as desired – and the outcomes to become increasingly accurate – training data must be diverse and offer a breadth of coverage."
The dataset isn’t publicly available; you have to apply for access if you want to get your hands on it.
New viral deepfake alert: Hey, have you ever wondered what Jennifer Lawrence would look like if she had the face of Steve Buscemi?
Well, today’s your lucky day.
The horrific mashup was created by VillanGuy, a data analyst living in Washington.
It’s not too bad, actually. The skin tones match and the facial expressions aren’t completely out of place. The footage is taken from a clip of Jennifer Lawrence giving a speech at this year’s Golden Globes awards.