
Tesla's autonomous lane-changing software is worse at driving than humans, and more

Also, one man launched a legal battle against the police for using facial recognition cameras in the UK

Roundup Hello, here's a quick roundup of recent machine learning tidbits that you can digest after the long weekend.

UK’s first legal hearing over facial recognition: One man is taking on South Wales Police, claiming his privacy was violated when his face was scanned by facial recognition cameras without his explicit permission.

The police have been trialling the technology across the UK for a while now, and the results haven’t been great. Ed Bridges, a resident of Cardiff in Wales, reckons his face was snapped on two occasions: once while he was out doing a bit of Christmas shopping, and again during a peaceful anti-arms trade protest.

A three-day hearing was held at the High Court in Cardiff last week. It’s the first legal challenge to facial recognition in the United Kingdom, and Bridges’ lawyers argued against the technology on the grounds of the right to privacy, equality, and data protection.

“The police started using this technology against me and thousands of other people in my area without warning or consultation,” he said in a statement.

“It’s hard to see how the police could possibly justify such a disproportionate use of such an intrusive surveillance tool like this, and we hope that the court will agree with us that unlawful use of facial recognition must end, and our rights must be respected.”

New AI2 office in Israel: The Allen Institute for Artificial Intelligence, an AI lab funded by the late Microsoft cofounder Paul Allen, has opened a new branch in Tel Aviv, Israel.

The new $8.4m hub will be led by research director Yoav Goldberg, a professor at Bar Ilan University. Goldberg and the gang will focus on natural language processing (NLP). AI2 was first launched in Seattle, Washington, and is led by CEO Oren Etzioni.

It has several NLP projects, ranging from Aristo, a system built to answer science exam questions, to Semantic Scholar, a machine-learning-powered search engine that helps people find relevant papers to read.

Unconstrained College Students Dataset: A computer science professor at the University of Colorado, Colorado Springs (UCCS) has been blasted for filming students on campus, without their knowledge, to train facial recognition systems.

Snapshots of over 1,700 people out and about on campus were taken with a surveillance camera over 20 days between February 2012 and September 2013, the Colorado Springs Independent first reported.

Experts criticized Terrance Boult, a professor of computer science at UCCS, for his carelessness. David Maass, senior investigative researcher at the Electronic Frontier Foundation, a nonprofit digital rights group based in San Francisco, said the project “essentially [normalizes] Peeping Tom culture”.

Boult said taking photos of people in public isn’t illegal, but agreed that facial recognition could be abused for nefarious purposes. The dataset containing the snapshots, known as UnConstrained College Students, is designed to train models to identify faces under challenging computer vision conditions, such as blurriness, difficult poses, or faces partially blocked by objects.

Last year, UCCS even held its second competition to find the best facial recognition model trained on the dataset.

Boult has tried to keep the identities of people in the dataset private. He didn’t hand the images over to government agencies and private companies until all the students in the database had graduated. The dataset also doesn’t include their names, and those who used it in 2017 were asked to sign a legal document promising not to publish any photos from it.

Facial recognition is an unregulated technology, and the US Congress held a hearing this week to discuss its dangers. We covered it in more detail here, in case you missed it.

Tesla gives customers more options when using Autopilot: Drivers can now decide if they want Tesla’s semi-autonomous vehicles to switch lanes on their behalf while in Autopilot mode.

The Navigate on Autopilot software was released last year; it helps customers take the correct highway exits and suggests lane changes. The latter function can now be overridden if drivers hold the steering wheel, brake, or flick the turn-signal stalk on and off, according to Consumer Reports, which test-drove a Tesla Model 3.

Tests run by the nonprofit publication also found that Navigate performed worse than human drivers when trying to change lanes automatically. Law enforcement representatives who spoke to Consumer Reports said the software cut off other cars without giving them enough space and sped past cars in ways that “violate state laws”.

Drivers often had to step in to stop Navigate from creating potentially dangerous situations, the testers found. “The system’s role should be to help the driver, but the way this technology is deployed, it’s the other way around,” said Jake Fisher, Consumer Reports’ senior director of auto testing.

“It’s incredibly nearsighted. It doesn’t appear to react to brake lights or turn signals, it can’t anticipate what other drivers will do, and as a result, you constantly have to be one step ahead of it.”

Yikes.

A Tesla spokesperson responded: “Navigate on Autopilot is based on map data, fleet data, and data from the vehicle’s sensors. However, it is the driver’s responsibility to remain in control of the car at all times, including safely executing lane changes.” ®
