Uber's disturbing fatal self-driving car crash, a new common sense challenge for AI, and Facebook's evil algorithms

Are we doomed?

Roundup It’s been a grim week for AI. The deadly Uber crash and fallout from the scandal between Facebook and Cambridge Analytica are a reminder of the ways algorithms can fail, and how they can be used against us.

Fatal Uber self-driving car vid - The video footage capturing the final moments before one of Uber’s self-driving cars struck a woman is disturbing, and reveals the conditions under which the car’s LiDAR failed.

The clip shows that it’s night and the roads are dark; there is a slight flash of light highlighting the frame of a bicycle and some white shoes. By the time you see her full body, the car is just moments from the collision. All of this happens very quickly, and it’s difficult even for a human to make her out.

It’s frightening that the onboard LiDAR didn’t detect her either. Darkness alone shouldn’t explain that, since LiDAR supplies its own illumination: it detects objects by emitting laser pulses and measuring the reflected beams. Her black jacket, however, wouldn’t have helped - black is the least reflective colour, so it returns the weakest signal and makes it more difficult for the system to make out its surroundings.
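To get a feel for why reflectivity matters, here is a back-of-the-envelope sketch - not Uber’s perception stack, just a simplified link-budget calculation in which the power, aperture, and sensitivity figures are made-up placeholders - of how a low-reflectivity target shrinks a LiDAR’s maximum detection range.

```python
import math

# Back-of-the-envelope sketch, not Uber's perception stack: a simplified
# LiDAR link budget for a diffuse target. The power, aperture, and
# sensitivity figures are made-up placeholders, chosen for illustration.

def max_detection_range(peak_power_w, reflectivity, sensitivity_w, aperture_m2=0.001):
    """Range at which the return signal falls to the receiver's noise floor.

    Received power scales with reflectivity / range^2, so solving for range
    gives range ~ sqrt(power * reflectivity * aperture / (pi * sensitivity)).
    """
    return math.sqrt(peak_power_w * reflectivity * aperture_m2 / (math.pi * sensitivity_w))

# A ~10 per cent reflective black jacket versus an ~80 per cent reflective
# light-coloured target: detection range scales with sqrt(reflectivity),
# so the jacket is only spotted at roughly 35 per cent of the distance.
for reflectivity in (0.10, 0.80):
    print(f"reflectivity {reflectivity:.0%}: "
          f"~{max_detection_range(100.0, reflectivity, 1e-6):.0f} m")
```

With these toy numbers the black jacket is detected at roughly 56 metres versus about 160 metres for the lighter target - the point being the square-root relationship, not the specific figures.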

The video is also a reminder that humans still need to pay attention and be prepared to take over when the self-driving car fails. An internal camera shows that the human driver was distracted, his eyes looking down at something, and by the time he looks up it’s too late. His mouth falls open in a moment of shock and panic, and then the video cuts out.

The incident is believed to be the first time that a pedestrian was killed in a collision with a self-driving car. After the death in Tempe, Arizona, Uber suspended all of its self-driving car tests in Arizona, San Francisco, Canada, and elsewhere.

We covered the story in more detail here.

Internal documents also revealed that the company’s self-driving cars were struggling to meet its target of driving 13 miles without any human intervention during testing on roads in Arizona, according to the New York Times.

It’s a shockingly low number, considering Waymo’s cars can supposedly drive an average of 5,600 miles before a human has to take control. Still, Uber continued to push its autonomous car efforts aggressively, hoping to impress its top executives by offering a self-driving taxi service by the end of the year and thereby boost profits.

Dara Khosrowshahi, Uber’s CEO, reportedly considered killing off the project but decided not to when he realised how important self-driving cars were to the business.

Testing common sense? - A challenge organised by a group of researchers from the Georgia Institute of Technology aims to test a computer’s vision, natural language, and knowledge skills.

The Visual Question Answering (VQA) knowledge challenge tests AI models by giving them an image and asking them at least three questions about the image. Here is an example scenario below.

An example question in the challenge. Image credit: VQA2018

It requires the system to have some vague idea of what a moustache is, know where it is normally located on the face, and realise that the one in the image is made of bananas instead of hair.

To do this, the system is trained on over 250,000 images taken from the COCO dataset, a set of captioned images of common objects, as well as over a million sample questions, each paired with ten “concise, open-ended answers” gathered from human annotators, against which a model’s response is judged.
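For the curious, here is a rough sketch of pulling one training sample out of those files, assuming the challenge’s published JSON annotation layout; treat the file names as illustrative placeholders and check the download page for the exact ones.

```python
import json

# Rough sketch of reading one VQA training sample. Field names follow the
# challenge's published JSON layout; the file paths are placeholders.
questions = json.load(open("v2_OpenEnded_mscoco_train2014_questions.json"))
annotations = json.load(open("v2_mscoco_train2014_annotations.json"))

q = questions["questions"][0]      # e.g. "What is the moustache made of?"
a = annotations["annotations"][0]  # the matching entry, linked by question_id

print(q["image_id"], q["question"])
# Each question carries ten free-form answers from ten human annotators;
# a model's answer is scored against this consensus.
for ans in a["answers"]:
    print(ans["answer"])
```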

“VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer,” according to the competition website.

Anyone is welcome to take part in the challenge and the organisers have released the training datasets online.

VQA 2018 is the third edition of the challenge and the submission deadline is on May 20. You can find more details here.

How Facebook’s AI can be used for propaganda - A notable Google AI engineer went on a long rant this week explaining how Facebook’s algorithms control and manipulate its users.

François Chollet, author of the deep learning software library Keras, argued that the social media giant’s grip on a user’s news feed acts as a “psychological control vector”. The algorithms that push the most important and relevant posts to the top of a user’s news feed decide whom we keep in touch with and what news articles and opinions we read. Facebook thus essentially exerts control over our political beliefs and worldview.

Chollet talks about how using Facebook can create a reinforcement learning loop. “A loop in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see,” he tweeted.
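As a thought experiment - and emphatically not a description of Facebook’s actual system - that loop can be sketched as a simple bandit algorithm that keeps adjusting what it shows based on the reactions it observes.

```python
import random

# Toy sketch of the loop Chollet describes, not Facebook's actual system.
# An epsilon-greedy bandit keeps tuning which post it shows based on the
# reactions it observes, drifting toward whatever maximises engagement.
scores = {"post_a": 0.0, "post_b": 0.0, "post_c": 0.0}

def observed_reaction(post):
    # Stand-in for real engagement signals (clicks, dwell time, shares).
    return random.random()

for step in range(1_000):
    # Mostly show the best-scoring post, occasionally explore another.
    if random.random() < 0.1:
        shown = random.choice(list(scores))
    else:
        shown = max(scores, key=scores.get)
    # Nudge the running estimate toward the reaction just observed.
    scores[shown] += 0.05 * (observed_reaction(shown) - scores[shown])
```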

He warned that AI is advancing rapidly, and that Facebook is investing heavily in the technology in the hope of becoming a leader in the field.

“We’re looking at a powerful entity that builds fine-grained psychological profiles of over two billion humans, that runs large-scale behaviour manipulation experiments, and that aims at developing the best AI technology the world has ever seen.

“Personally, it really scares me. If you work in AI, please don’t help them. Don’t play their game. Don’t participate in their research ecosystem. Please show some conscience,” he urged.

It’s an interesting take and it’s obviously what Cambridge Analytica believes is possible too. But how effective Facebook really is at political profiling and mass manipulation is difficult to measure and an important question to consider.

You can read the whole Twitter thread here.

Following his last comment, Twitter users were quick to ask whether Google is any better than Facebook in this respect. Good question.

Self-driving shuttles in airports - Gatwick Airport, the second busiest airport in the United Kingdom, will be trialling self-driving shuttles for its employees for the first time.

It’s apparently the first airport in the world to invest in these shuttles, and has partnered with Oxbotica, a British autonomous vehicle startup.

In a statement, Gatwick Airport said: “If the technology is proven in an airfield environment and following further trials, this project may be the precursor to a wide range of other autonomous vehicles being used on airport, including aircraft push back tugs, passenger load bridges, baggage tugs and transportation buses.”

The shuttles are much simpler than the self-driving cars on the roads. They rely on sensors and do not need GPS for navigation. No passengers or aircraft will be involved in the testing, and the shuttle will only be used to take workers to and from the North and South terminals via the airside roads.

The data collected will also be shared with the UK government’s Department for Transport and the Civil Aviation Authority, as well as others including XL Catlin, an insurance company interested in autonomous airfield vehicles. ®
