Dodgy US government facial data grab, self-nannying cars, and a chance for non-techies to learn about AI

The week's news in AI and machine learning

Roundup Hello, here's a quick rundown on what's been happening in the world of machine learning.

Cameras and sensors coming to all Volvo cars: Volvo is adding driver-watching cameras and sensors to all of its cars to tackle drink-driving and other unsafe motoring.

The Chinese-owned, famously Swedish automaker has set itself the lofty goal of eradicating fatal accidents involving its cars by 2020, which it hopes to achieve in part by altering driver behavior.

“When it comes to safety, our aim is to avoid accidents altogether rather than limit the impact when an accident is imminent and unavoidable,” Henrik Green, senior veep of research and development at Volvo Cars, said. “In this case, cameras will monitor for behavior that may lead to serious injury or death.”

The cameras inside the cars will track the driver’s eye movements, catching intoxicated or tired motorists who nod off at the wheel, as well as negligent ones paying more attention to their phones than the road. Sensors will also detect a lack of steering input, and judge whether a driver is weaving dangerously in and out of traffic or reacting too slowly.

All of this information will be fed into a system that decides what action to take: the car can flash warning signals, slow down, or alert the Volvo on Call assistance service, whose operators can contact the driver and provide help, for instance if the vehicle has run out of fuel or has a puncture.

“There are many accidents that occur as a result of intoxicated drivers,” said Trent Victor, professor of driver behavior at Volvo Cars. “Some people still believe that they can drive after having had a drink, and that this will not affect their capabilities. We want to ensure that people are not put in danger as a result of intoxication.”

Volvo hopes to roll out the cameras and sensors as part of its SPA2 vehicle platform. It’s likely the data will be processed by machine-learning computer-vision algorithms, given the tech will run on Nvidia’s Drive AGX chip, which hosts six different processors optimized for “AI, sensor processing, mapping and driving.” Volvo wasn’t immediately available to confirm whether its system will use actual artificial intelligence, or just heuristics and if statements.
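For illustration only, here’s a minimal sketch of what the “heuristics and if statements” end of that spectrum could look like: a hand-tuned escalation ladder running from warnings up to a human operator. Every signal name and threshold below is invented for the sketch; none of it is Volvo’s actual system.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    """Hypothetical per-frame signals from in-cabin cameras and sensors."""
    eyes_closed_secs: float      # continuous eye-closure time
    gaze_on_road: bool           # eye tracker says driver is watching the road
    no_steering_secs: float      # time since the last meaningful steering input
    weaving_score: float         # 0.0 (steady) to 1.0 (erratic lane weaving)

def choose_intervention(s: DriverState) -> str:
    """Escalating responses, roughly as described above:
    warn first, then slow the car, then summon human help."""
    if s.eyes_closed_secs > 2.0 or s.no_steering_secs > 10.0:
        return "alert Volvo on Call operator"   # driver appears unresponsive
    if s.weaving_score > 0.7:
        return "slow down"                      # dangerously erratic driving
    if not s.gaze_on_road:
        return "flash warning signals"          # distracted, e.g. by a phone
    return "no action"

if __name__ == "__main__":
    distracted = DriverState(eyes_closed_secs=0.2, gaze_on_road=False,
                             no_steering_secs=1.0, weaving_score=0.3)
    print(choose_intervention(distracted))      # -> flash warning signals
```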

In addition to altering driver behavior, Volvo is also capping the maximum speed of its cars at 180kph (112mph) from 2021.

New AI research lab alert! Stanford University, nestled in the prime location of Silicon Valley, has launched the Institute for Human-Centered Artificial Intelligence (HAI) to foster a more interdisciplinary approach to studying AI.

The institute is co-directed by Fei-Fei Li, a computer science professor known for her work in computer vision, and John Etchemendy, a philosophy professor and former Stanford provost.

“The way we educate and promote technology is not inspiring to enough people,” Li said. “So much of the discussion about AI is focused narrowly around engineering and algorithms. We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone.”

The study of AI won’t be limited to computer science, but will draw upon different disciplines including “business, economics, education, genomics, law, literature, medicine, neuroscience, philosophy and more.” AI has been criticised for its lack of diversity, so hopefully opening up the field to people from different academic backgrounds will make it more representative.

The US government is testing facial recognition systems on... what now?! Pictures of dead people, abused children, and immigrants were used, directly or indirectly, by America's National Institute of Standards and Technology (NIST) to test the performance of facial recognition systems, it was claimed this month.

A trio of academics uncovered that NIST, part of the US government's Department of Commerce, maintained collections of at least some of these aforementioned photos without the consent of those pictured, according to an article in Slate. NIST, for what it's worth, insists it stores no images of exploited children: those are kept on Homeland Security servers.

The academics made the discovery by combing through public datasets and submitting Freedom of Information requests. A detailed research paper is expected to be published this summer.

It’s particularly worrying since NIST runs the Face Recognition Vendor Test (FRVT) program used to benchmark models across research and industry. Computer vision systems are judged on how quickly and accurately they can match an image to a particular photo in a dataset. NIST is also in charge of developing technical federal guidelines on the reliability, robustness, and security of AI systems as part of the US government's AI initiative.
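To make that benchmarking task concrete, here’s a toy sketch of one-to-many face identification: each enrolled face is reduced to an embedding vector, a probe is matched to the gallery entry with the highest cosine similarity, and the search is timed. The random vectors stand in for real face embeddings; this is not NIST’s actual FRVT protocol, which is far more rigorous.

```python
import time
import numpy as np

# Toy stand-in for the one-to-many matching task such benchmarks measure:
# given a probe face embedding, find the closest enrolled identity and time
# the search. Real FRVT evaluations feed actual face images through each
# vendor's feature extractor; the random vectors here are placeholders.

rng = np.random.default_rng(seed=0)
gallery = rng.normal(size=(10_000, 512))                 # 10k enrolled "faces"
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

probe_id = 1234
probe = gallery[probe_id] + 0.1 * rng.normal(size=512)   # noisy re-capture
probe /= np.linalg.norm(probe)

start = time.perf_counter()
scores = gallery @ probe                                  # cosine similarities
best = int(np.argmax(scores))
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"best match: id {best} (ground truth {probe_id}) in {elapsed_ms:.2f} ms")
```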

Scraping together datasets of people’s faces is difficult. Academic researchers and engineers at companies often swipe them from Creative Commons sources or lift them from websites without permission. That raises serious questions when those photos are mugshots, child abuse imagery, or snaps of people applying for American visas.

“How do we understand privacy and consent in a time when mere contact with law enforcement and national security entities is enough to enroll your face in someone’s testing?” wrote researchers Os Keyes, Nikki Stevens, and Jacqueline Wernimont.

"How will the black community be affected by its overrepresentation in these data sets? What rights to privacy do we have when we’re boarding a plane or requesting a travel visa? What are the ethics of a system that uses child pornography as the best test of a technology?"

Jennifer Huergo, the director of media relations at NIST, told Slate: “The data used in the FRVT program is collected by other government agencies per their respective missions.

"In one case, at the Department of Homeland Security (DHS), NIST’s testing program was used to evaluate facial recognition algorithm capabilities for potential use in DHS child exploitation investigations. The facial data used for this work is kept at DHS and none of that data has ever been transferred to NIST. NIST has used datasets from other agencies in accordance with Human Subject Protection review and applicable regulations.” ®

Updated to add

NIST's Huergo has been in touch to say: "NIST has not compiled, nor does it have in its possession, photos of the victims of child exploitation. As we stated, that data is held by the Department of Homeland Security in support of its efforts to fight child exploitation.

"Furthermore, NIST is not testing industry standards, we are providing independent government evaluations of prototype face recognition technologies. These evaluations provide developers, technologists, policy makers and end-users with technical information that can help them determine whether and under what conditions facial recognition technology should be deployed."
