
Mobileye's autonomous cars are heading to California. But they're not going to kill anyone. At least not on purpose

Human CEO outlines safety policy for other humans

Analysis It's hard to know at what point in Amnon Shashua's presentation on autonomous cars I started fearing for my life. But it began in earnest when others started asking questions and he started answering them.

As CEO of Mobileye, Shashua has been at the forefront of self-driving car technology for some time. He started the company in 1999 and created the first version of its EyeQ chip five years later.

He is seen as a pioneer and a leading voice. And for that reason he was given a prime keynote slot at the Intel Capital conference this week (Intel bought Mobileye last year for a staggering $15bn). It is clear Shashua sees himself as a thought leader, and in that respect he chose to tackle one of the biggest issues facing the autonomous car industry: safety.

Everyone is understandably anxious about machines driving cars around. Propelling a huge piece of metal around at great speeds very close to other people undeniably calls for a cautious approach.


Such is the anxiety that if an autonomous vehicle is involved in a crash it becomes headline news – which is a little ridiculous when you consider that there are more than 15,000 car accidents per day in the United States alone. But autonomous cars are coming and accidents are going to happen. And so Shashua wants to get ahead of it.

Most of his presentation was therefore taken up with these questions: how do we approach safety? And how do we ensure that when accidents do happen – as they will – each one won't be viewed as yet another piece of evidence that autonomous cars are dangerous?

As an engineer, he took an engineer's perspective on the problem: breaking it down to its constituent parts, finding a solution for each, and putting it all together.

Assertive

The first thing he did was make a persuasive case for why autonomous cars cannot be timid. He showed footage of a difficult stretch of busy road in Mobileye's home town of Jerusalem: one lane joins a larger road and then fades away. The road then splits.

It means drivers on the entry lane have no choice but to move over, while at the same time other cars on the larger road are trying to move across. It's the kind of intersection that every city has and all the locals gripe about.

Shashua points out that in this situation, there is literally no choice but to be assertive in your driving. If you aren't you will get stuck and create more problems as cars build up behind you. "No city would accept autonomous cars if they create traffic jams," he reasons.

Logically therefore, an autonomous car has to drive more like a human – with a degree of assertiveness. And with that notion accepted, you have to discard the idea of complete safety. In its place, he argues, what you need is a safety guarantee. That guarantee can't say "there will never be an accident" but it can say – and this is his core argument – "this car will never cause an accident."

At the same time, he argued, you have to address the issue of autonomous cars' "economic stability" – which is code for not being sued. If every time an autonomous car is in an accident there is a question over who is to blame, the manufacturer is unlikely to survive for very long.

Shashua then posited the question: how much safer does an autonomous car need to be than the average human driver? He immediately answered his own question: a good rule of thumb, he said, is 1,000 times safer. Ok, that's hard to disagree with. That would be the equivalent of someone with the driving skills of Lewis Hamilton and the patience of an accountant.

He then does some math. Statistically there is one driver fatality per hundred million miles travelled – on the order of one fatality per million hours of driving. To demonstrate the required safety figure, autonomous cars would therefore have to go through one billion hours of driving and cover 30 billion miles – which is, obviously, unfeasible if we want autonomous cars any time in the next decade.
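For the record, here is the back-of-the-envelope version of that arithmetic as we understood it. This is our reconstruction, not Shashua's slides, and the average-speed figure is our own assumption:

# Our reconstruction of the keynote's validation arithmetic. The ~30mph
# average speed is our assumption; the fatality rate follows from the
# one-per-hundred-million-miles figure quoted above.

human_rate_per_hour = 1e-6      # roughly one fatality per million driving hours
safety_factor = 1_000           # the "1,000 times safer" rule of thumb
avg_speed_mph = 30              # assumed average driving speed

target_rate = human_rate_per_hour / safety_factor  # 1e-9 fatalities per hour

# Demonstrating a rate that low by observation alone takes on the order
# of 1/rate hours of driving before you would expect to see even one event.
hours_needed = 1 / target_rate               # 1e9 – one billion hours
miles_needed = hours_needed * avg_speed_mph  # 3e10 – 30 billion miles

print(f"{hours_needed:.0e} hours, {miles_needed:.0e} miles")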

Hmmm

Now, these figures are what we wrote down. The argument seemed a little hokey, and it's possible we missed a part of it – Shashua went through it fast – but it's what we got. The effect, however – at least in this reporter's mind – was to set off an early alarm bell.

The "big-figure, big-figure, big-figure, so that's why we have to do x" is a common way for engineers to push an argument past people - and it works. It's also a straw man argument: as far as we are aware no one has actually said that autonomous cars have to drive around for a billion hours before being allowed on the market.

As it turns out, straw man arguments are Shashua's preferred way of responding to any form of criticism, implied or otherwise. But more on that later.

The upshot of that first argument was that safety requirements should not be data driven: a company shouldn't have to prove it has driven x number of miles before it gets permission to sell an autonomous car. Even though that is effectively what autonomous car companies are doing right now – it's just that they are driving those millions of miles to test their systems.

More importantly – and worryingly – the argument that data and safety shouldn't be matched suggests that it doesn't matter if autonomous cars are involved in lots of accidents in the coming years – even if their accident rates are statistically higher than those of human-driven vehicles – so long as the car did not cause the accident, according to the autonomous car maker's definition of what "cause" actually means.

That strikes us as a dangerously myopic approach to take.

Redundant

Shashua then talked briefly about what safety and redundancy mean for an autonomous car, making the argument that an array of cameras around a car can provide full vision and awareness, and so can act as its main sensors. The other sensors – radar and lidar – can then act as "true redundancy," since they work on a different system.

Which, again, is an argument. But it just so happens that Mobileye's technologies and patents are based on camera technology, so it is clearly a biased perspective. Later on, he argues that this camera-first approach means all autonomous vehicles should have no fewer than 12 cameras to cover all eventualities.
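To make that "true redundancy" idea concrete, here is a toy sketch of our own – the names and logic are ours, not Mobileye's. The point is two fully independent perception channels, each capable of a complete view of the road on its own, rather than one fused stack with shared failure modes:

# Toy illustration of "true redundancy": two independent perception
# channels, each sufficient on its own. Our own sketch, not Mobileye's code.

def camera_view(scene):
    """Stand-in for a camera-only perception stack."""
    return scene.get("seen_by_cameras", False)

def radar_lidar_view(scene):
    """Stand-in for a separate radar/lidar perception stack."""
    return scene.get("seen_by_radar_lidar", False)

def decide(scene):
    # Either channel alone is enough to trigger braking, so losing
    # one channel still leaves a complete, independent view of the road.
    return "brake" if camera_view(scene) or radar_lidar_view(scene) else "proceed"

# A pedestrian the cameras miss but the lidar catches still gets braked for:
print(decide({"seen_by_cameras": False, "seen_by_radar_lidar": True}))  # brake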

Others who are further ahead in autonomous car technology have a completely different perspective: General Motors CTO Jon Lauckner, for example, told us that he views lidar as a critical component of an autonomous car's primary sensor system, in large part because it sees things in 3D whereas cameras do not. Using lidar also means you don't need as many cameras, he says.

And then there's the fact that other companies are moving forward with new hybrid sensors that combine a camera and lidar in one unit, providing the benefits of both. (One of them, AEye, is also at the conference, claiming it can offer greater intelligence at a lower processing rate – it could be the future of autonomous car sensors.)

Mobileye's Shashua moves on to another argument: "No one has ever defined what is a dangerous situation and what you should do in such a situation."

But we are talking about machines here, and machines need to work from rules. If we are to have machines driving cars, and some kind of system that can apportion blame for an accident – or, more specifically from his perspective, prove that an autonomous car did the "right" thing – then we have to have definitions. Ok, that's a fair point.
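To give a flavour of what such a definition might look like, here is one hypothetical rule of our own devising – plain braking-distance physics, not anything Shashua presented – pinning down "dangerous" for the simple case of following another car:

# A hypothetical definition of a "dangerous situation" for one scenario
# (following another car). Textbook physics; our own toy example only.

def min_safe_gap_m(v_rear, v_front, react_s=1.0, a_brake=6.0, a_front_max=8.0):
    """Worst case: the car ahead brakes as hard as physics allows while we
    spend react_s seconds reacting, then brake at a_brake. Speeds in m/s,
    decelerations in m/s^2, result in metres."""
    our_stop = v_rear * react_s + v_rear ** 2 / (2 * a_brake)
    their_stop = v_front ** 2 / (2 * a_front_max)
    return max(0.0, our_stop - their_stop)

def is_dangerous(gap_m, v_rear, v_front):
    # If the gap is below the worst-case minimum, blame for any collision
    # would attach to the car behind.
    return gap_m < min_safe_gap_m(v_rear, v_front)

# Tailgating at 30 m/s (~67mph) with a 20-metre gap:
print(is_dangerous(20, 30, 30))  # True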
