
Face-recognizing cop body cams hit another hurdle, genderless voice assistants, and more

One of these days we'll use machine learning to write these AI news summaries

Roundup Let's catch up with recent goings-on in the world of artificial intelligence.

California set to ban face-recognition from police body cams: The proposed Body Camera Accountability Act, a bill that seeks to ban cops from wearing face-recognizing cameras for three years, has passed the California Senate. It's now over to Governor Gavin Newsom to sign it into the Golden State's statute books.

Using machine-learning algorithms to detect faces in body-cam videos and match them to photos in a database of suspects and perps is risky. There is ample room for the software to make false matches, particularly when analyzing women and people with darker skin. This is why cities including San Francisco and Oakland in Cali, and Somerville in Massachusetts, have barred their local governments and police from rolling out the tech.
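
For a sense of the kind of one-to-many matching at issue, here's a minimal sketch using the open-source face_recognition library. To be clear, this is not any police vendor's actual system, and the file names are hypothetical stand-ins; the point is that everything hinges on a distance threshold, and loosening it is exactly how lookalikes start to collide:

```python
# Minimal sketch of one-to-many face matching using the open-source
# face_recognition library. NOT any vendor's bodycam system; the file
# names below are hypothetical stand-ins.
import face_recognition

# Load a frame pulled from body-cam footage and two database mugshots.
frame = face_recognition.load_image_file("bodycam_frame.jpg")
db_paths = ["suspect_a.jpg", "suspect_b.jpg"]
mugshots = [face_recognition.load_image_file(p) for p in db_paths]

# Compute 128-dimensional face encodings ([0] assumes one face per image).
frame_enc = face_recognition.face_encodings(frame)[0]
db_encs = [face_recognition.face_encodings(m)[0] for m in mugshots]

# Lower distance means more similar. The library's default tolerance is
# 0.6; a looser threshold raises the false-match rate - the heart of the
# accuracy worries around women and darker-skinned faces.
distances = face_recognition.face_distance(db_encs, frame_enc)
for path, dist in zip(db_paths, distances):
    print(f"{path}: distance={dist:.3f} match={dist <= 0.6}")
```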

Now, California has gone one step further, hoping to place a temporary state-wide ban on law enforcement using the technology with body cameras – it's hoped the three years will give boffins time to improve their machine-learning code.

“Face-scanning police body cameras have no place on our streets, where they can be used for dragnet surveillance of people going about their private lives, including their locations and personal associations,” said Matt Cagle, technology and civil liberties attorney for the American Civil Liberties Union, a non-profit based in New York.

“With this bill, California is poised to become one of the first states in the country to prevent its residents from becoming test subjects for an invasive tracking technology proven to be fundamentally incompatible with civil liberties and human rights. Other states should follow suit.”

Experts in the AI and law enforcement communities also called for a moratorium on facial recognition during a US congressional hearing in May.

Meanwhile in China... Facial recognition is so widespread in the Middle Kingdom that a young woman was left unable to access payment systems or sign in at work after undergoing plastic surgery, it is claimed. Her nose job apparently altered her appearance so much that she was no longer recognized by computer systems.

Was a business really robbed by crooks using a deepfake AI? Stories of an unnamed British energy company being hoodwinked out of €220,000 by a criminal using AI voice-cloning technology made the rounds at the start of this month. It's claimed a senior executive was tricked into wiring the dosh from the business to an account in Hungary by someone who, on the phone, sounded like the executive's boss. It's said the voice was generated by an AI mimicking the big cheese, and the fraudster ordered the hapless suit to transfer the funds to what was supposedly a supplier.

However, there’s little evidence to back up the claims, it seems.

Tal Be’ery, co-founder of cryptocurrency company ZenGo and a security research manager, read coverage of the caper with suspicion. He reckons there is no convincing evidence that the big boss's voice was imitated by software.

The call wasn't recorded; all we know is that the caller had a slight German accent, and no suspects have been found. Also, German news mag Der Spiegel confirmed that the UK energy firm's insurance company has zero evidence a deepfake AI was used.

“There are only two ways to objectively identify this attack as a deepfake AI scam,” Be’ery opined earlier this month.

"One, catch the fraudsters and learn about their modus operandi. That did not happen, however, as the report mentions, 'investigators haven’t identified any suspects.' Two, have some recordings of the scam calls and analyze them to find some artefacts specifically related to deepfake AI tools. The report, nonetheless, explicitly determines that 'the call was not recorded.'"

Be’ery told The Register he has been trying to contact journalists to get them to clarify their stories.

A genderless voice assistant: Siri, Alexa, and whatever the heck Google's AI assistant is called normally sport female-sounding voices. You can change them to male-sounding ones, of course, but what if you want a realistic voice that sounds, er, genderless?

Well, here's Q. The computer-generated voice was created by a group of engineers, linguists, and sound designers led by Copenhagen Pride and creative agency Virtue, according to Wired.

Here’s what Q sounds like:

[YouTube video]

It's based on one person's voice, tuned to use sound frequencies perceived as gender neutral – somewhere in the region of 145 to 175Hz.
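
As a rough illustration of that frequency claim – emphatically not Q's actual production pipeline – here's a small Python sketch that estimates a recording's fundamental frequency via autocorrelation and checks whether it lands in the reported 145 to 175Hz band. A synthetic 160Hz tone stands in for a real voice sample:

```python
# Rough sketch: estimate a signal's fundamental frequency (f0) via
# autocorrelation and test it against the ~145-175Hz "gender neutral"
# band. Not Q's actual pipeline; a synthetic tone stands in for speech.
import numpy as np

def estimate_f0(samples, sample_rate, fmin=60.0, fmax=400.0):
    """Crude autocorrelation pitch estimate for a mono signal."""
    samples = samples - samples.mean()
    corr = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest plausible pitch period
    lag_max = int(sample_rate / fmin)   # longest plausible pitch period
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

def in_neutral_band(f0, lo=145.0, hi=175.0):
    return lo <= f0 <= hi

# One second of a 160Hz sine wave as a stand-in for a voice recording.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 160.0 * t)
f0 = estimate_f0(tone, sr)
print(f"estimated f0: {f0:.1f}Hz, in neutral band: {in_neutral_band(f0)}")
```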

The US government expects to spend about $970m on AI research and development in 2020, according to a budget report [PDF] out this month.

Taylor Swift threatened to sue Microsoft over its naughty internet bot, Tay: Remember Tay? You know, the Twitter bot that suddenly turned into a feminist-hating Nazi sympathizer when it was hijacked by web trolls.

Yeah, that one. It turns out that American pop superstar Taylor Swift, who is also nicknamed “Tay Tay”, threatened Microsoft with legal action over its AI experiment, according to Redmond’s president Brad Smith. The debacle was bad PR for Swift, who has made millions on the back of her sweet all-American girl-next-door image.

In his new book, Tools and Weapons, Smith said he received an email from a lawyer acting on behalf of Swift, arguing that “the use of the name Tay created a false and misleading association between the popular singer and our chatbot, and that it violated federal and state laws,” according to The Guardian.

Luckily, Microsoft quickly shut Tay down before things got too out of hand, and it didn't hear from Swift's lawyers again. ®
