Original URL: https://www.theregister.com/2010/09/16/vision_superchip/

Custom superchippery pulls 3D from 2D images like humans

Self-driving cars, 360°-vision hats ... much better CCTV

By Lewis Page

Posted in HPC, 16th September 2010 09:21 GMT

Computing brainboxes believe they have found a method that would let robotic systems perceive the 3D world around them by analysing 2D images much as the human brain does - which would, among other things, make possible the affordable development of cars able to drive themselves safely.

For a normal computer, or even a normal supercomputer, analysing 2D images of fast-moving traffic as quickly as a human does it is a massive task requiring colossal resources. For instance the Roadrunner hypercomputer, third most powerful in the world according to the latest rankings, is thought by boffins in the know to perhaps be capable of handling a car - though it would have to do so by remote hookup, as it weighs more than 200 tonnes and requires three megawatts of power. This is obviously not going to be a mass-market solution.

But yesterday at the High Performance Embedded Computing (HPEC) workshop in Boston, engineers presented a new system dubbed "Neuflow". This uses custom hardware modelled on the brain's visual-processing centres, all built on a single chip. Its designers say that it can process megapixel images and extract 3D information from them in real time.

Not only can a Neuflow system, according to its inventors, process imagery at blistering speed: it is also small and frugal with power.

"The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places," says Eugenio Culurciello of Yale uni's engineering department. Apparently such a Neuflow "convolutional neural network" machine would require only a few watts of power - it might, in fact, be a viable portable or wearable solution as well as vehicle-mounted.

This would mean that a robot car equipped with simple cameras could perceive the road, buildings, other cars and pedestrians in 3D: there would be no need for the expensive arrays of close-in laser scanner systems generally used on autonomous-car prototypes today.

Still smaller devices might be possible, perhaps allowing a soldier's helmet to watch all around him and pick out movement or threats, or permitting small robots to get about inside buildings or other cluttered environments without constant remote control from a human operator.

Needless to say, Neuflow tech could also hugely enhance the effectiveness of CCTV and similar surveillance systems. At present these are generally used reactively, well after a given event, and analysing their results eats up thousands of man-hours. Computers that could parse raw footage into distinct moving objects would potentially be able to automate much of this work and speed it up.

There's more here on convolutional neural networks for those interested, including code downloads and other goodies. ®