New algorithm could help self-driving cars scout out hidden objects

If they are going v-e-r-y slowly

A team of engineers has developed algorithms that reconstruct images of objects hidden around corners or behind walls, and they believe the technique could one day help make self-driving cars safer, albeit very slow ones at present.

The researchers call this “non-line-of-sight imaging” (NLOS) and have described the technique in a paper published in Nature on Monday.

First, they fire a laser pulse at a beam splitter; the light then passes through a scanning galvanometer, which uses a rotary motor to sweep the beam 360 degrees along its line of sight.

The beam hits a wall and bounces onto the hidden object, which reflects it back to the wall and into a lens that focuses it onto a light-detecting sensor. A complete scan of an environment can take anywhere from two minutes to an hour, depending on the lighting conditions and how well the object reflects light.
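To get a feel for where that time goes: the scan has to dwell long enough at each point on the wall to catch the faint returning photons. The sketch below is not from the paper; the grid size and dwell times are made-up figures chosen only to bracket the two-minutes-to-an-hour range quoted above.

```python
# Rough, illustrative estimate of total NLOS acquisition time.
# Grid size and per-point dwell times are hypothetical, chosen only to
# bracket the "two minutes to an hour" figure quoted in the article.

def scan_time_seconds(points_per_side: int, dwell_seconds: float) -> float:
    """Total time for a square raster scan of the relay wall."""
    return points_per_side ** 2 * dwell_seconds

# Bright, reflective hidden object: short dwell per scan point.
print(scan_time_seconds(64, 0.03) / 60)   # ~2 minutes
# Dim or poorly reflecting object: much longer dwell per point.
print(scan_time_seconds(64, 0.9) / 3600)  # ~1 hour
```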

By measuring when and where each photon arrives at the detector, an algorithm can begin to retrace the photons' paths and build up an image of the object hidden behind the wall.
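As a rough illustration of that retracing step (not the authors' actual code), the naive approach is back-projection: each photon arrival time pins the hidden scatterer to a spherical shell around the illuminated wall point, and voxels consistent with many detections accumulate votes. The sketch below assumes a confocal setup in which the laser and detector look at the same wall point.

```python
import numpy as np

C = 3e8  # speed of light in m/s

def backproject(scan_points, arrival_times, voxels, tolerance_m=0.01):
    """Naive NLOS back-projection (illustrative only).

    scan_points:   (N, 3) array of illuminated points on the visible wall
    arrival_times: (N,) photon round-trip times from each wall point, in seconds
    voxels:        (V, 3) array of candidate 3D positions behind the wall
    Returns a (V,) array of votes; bright voxels suggest a hidden surface.
    """
    votes = np.zeros(len(voxels))
    for p, t in zip(scan_points, arrival_times):
        radius = C * t / 2.0  # wall point -> hidden scatterer -> wall point
        dist = np.linalg.norm(voxels - p, axis=1)
        votes += (np.abs(dist - radius) < tolerance_m).astype(float)
    return votes
```

This brute-force accumulation is exactly the kind of heavy computation the researchers set out to avoid, which is where the transform Lindell describes below comes in.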

David Lindell, co-author of the paper and a graduate student at Stanford University, explained to The Register that the system measures the time it takes for photons reflected from the surface of the object to return to the detector.

“We repeat this procedure for many different points on the visible surface. After aggregating these timing measurements from all the different scan positions, we apply a mathematical transform to the measurements which enables us to reconstruct the object in a very computationally efficient way. The key insight is that the measurements can be transformed into a domain where standard image processing techniques can be used to efficiently recover the hidden object.”
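The article does not spell out the transform, but the idea Lindell describes, re-casting the measurements so that standard image processing applies, is commonly realised as a frequency-domain deconvolution. The sketch below is a generic Wiener-style filter written under that assumption, not the authors' implementation.

```python
import numpy as np

def wiener_deconvolve_3d(measurements, kernel, snr=0.1):
    """Recover a hidden volume by frequency-domain (Wiener) deconvolution.

    Assumes the NLOS measurements have already been resampled into a form
    where blurring by a known 3D kernel relates them to the hidden scene;
    a few FFTs then replace an enormous linear solve.
    """
    M = np.fft.fftn(measurements)
    K = np.fft.fftn(kernel)
    filt = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)  # regularised inverse
    return np.real(np.fft.ifftn(M * filt))
```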

LiDAR systems currently used in testing self-driving cars work in a similar manner, rotating a laser and constructing images from the light that ricochets off objects. But NLOS imaging has been hard to apply to them because processing the repeated light scans is so computationally demanding that images cannot be produced quickly enough to be useful in the real world.
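For comparison, the direct, line-of-sight case that conventional scanning LiDAR handles is computationally trivial: each return becomes a point in space from a rotation angle and a round-trip time, roughly as sketched below (a simplified 2D illustration, not any vendor's API).

```python
import math

C = 3e8  # speed of light in m/s

def lidar_return_to_point(azimuth_rad: float, round_trip_s: float):
    """Convert one return from a rotating 2D LiDAR into an (x, y) position.

    Range comes from the pulse's round-trip time; direction comes from the
    scanner's rotation angle when the pulse was fired.
    """
    rng = C * round_trip_s / 2.0
    return rng * math.cos(azimuth_rad), rng * math.sin(azimuth_rad)

# A return 66.7 nanoseconds after firing, straight ahead: an object ~10 m away.
print(lidar_return_to_point(0.0, 66.7e-9))
```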

But Matthew O’Toole, a postdoctoral scholar also at Stanford University, said: “We believe the computation algorithm is already ready for LiDAR systems. The key question is if the current hardware of LiDAR systems supports this type of imaging.”

Lindell added that the hardware would need to keep the laser and detector “optically aligned” so that they both observe the same points of light as the LiDAR scans its environment.

There are other difficulties in using this with a LiDAR system on a self-driving car. “The biggest challenge is that only a very small fraction of the light emitted from the laser actually returns after bouncing off the hidden object from around the corner. For autonomous vehicles, we need to decrease the time we are collecting this light from minutes down to seconds and operate the system on a moving platform under bright sunlight,” Lindell said.

The more light that is reflected back, the higher the quality of the images the algorithms can recreate. Since different objects have different reflectivity levels, it will be harder to make out dull, diffuse hidden objects such as trees than shiny metal lamp posts.
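A rough way to see why reflectivity and distance matter so much: the light scatters diffusely at the wall, spreads out on the way to the hidden object, and spreads out again on the way back, so only a tiny fraction ever reaches the detector. The scaling below is a crude illustration with made-up albedo values, not figures from the paper.

```python
def relative_nlos_return(object_albedo: float, wall_albedo: float,
                         wall_to_object_m: float) -> float:
    """Crude relative signal strength for a three-bounce NLOS return.

    The diffuse bounce at the wall and the bounce off the hidden object each
    spread the light over a growing area, so the return falls off roughly with
    the fourth power of the wall-to-object distance and scales with how
    reflective both surfaces are. Illustrative only, not from the paper.
    """
    return (wall_albedo ** 2) * object_albedo / (wall_to_object_m ** 4)

# A shiny lamp post one metre behind the corner vs foliage two metres away.
print(relative_nlos_return(0.9, 0.7, 1.0))   # stronger return
print(relative_nlos_return(0.2, 0.7, 2.0))   # ~70x weaker
```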

The team hope to improve their system, and said they have already started working on applying NLOS imaging to commercial LiDAR systems. ®
