Original URL: http://www.theregister.co.uk/2007/06/28/softky_robots_part_two/

Robot brains? Can't make 'em, can't sell 'em

Why dopes still beat boffins

By Bill Softky

Posted in Science, 28th June 2007 09:02 GMT

The current generation of "consumer robots" is driven mostly by robot-love: people enjoy things which move around on their own, especially if they can build or tinker with the gadgets themselves. That much became clear at a recent symposium on Robots, which I described here last month. The consumer robot business today is manned by avid tinkerers because there is neither a technology for truly autonomous gadgets, nor a business model to support them even if they did exist.

Robot bacterium?

At the symposium, your reporter posed the following question to the panel:

"The three commercial presenters offer consumer products with pre-programmed behaviors about equal to those of a bacterium.  The lone researcher demonstrated fancier computer vision, but it took a dozen graduate students a year to develop, and is still extremely simple and pre-programmed. When can we expect our robots to have the sophistication, responsiveness, and robustness of - say - a mouse?"

No one answered the question, of course, but the most enlightening response came from Colin Angle, CEO of iRobot (which manufactures the autonomous vacuum-cleaning Roomba):

"The Roomba is actually very sophisticated: it has a multi-threaded operating system, and was built by over a hundred computer scientists and a dozen PhDs," he replied.

He's right, of course. The Roomba really is a sophisticated piece of computer engineering - but sophistication by computer standards does not translate to biological sophistication.  I was tempted to respond that  bacteria are also multi-threaded - they can grow and eat and reproduce and move all at the same time, too.  Unfortunately, Angle's PhDs have the unenviable task of reproducing in silicon what Nature has spent a billion years on. 

iRobot's Roomba is a great example of how very hard real-life robotics is.  The task for the disk-shaped rolling vacuum seems simple:  roam around a room, vacuum up dirt, and come back to the dock in time to recharge.  But to accomplish that task, the Roomba needs infra-red locators and "virtual walls" spread around the room to keep it from getting lost elsewhere in the house.

Perhaps the hardest task is to avoid "getting stuck": not just physically getting wedged somewhere, but  running in circles or vacuuming the same region over and over.  Merely detecting "stuck-ness" from its sensor data required vast amounts of trial-and-error programming, as did delineating how to recover.  Meanwhile, the iRobot corporation has been obliged to simplify the hardware mercilessly, so that the whole package of motors/wheels/vacuum/software is affordable - say below $200 - an economising which leaves little room to develop sophisticated planning and "intelligence."

Moore's Law for gears

Angle's clever lament on the business of building such gadgets - "Moore's law doesn't apply to gears" - masks a deeper truth. What he means is that mechanical and hardware costs have not dropped as fast as chips, memory, and bandwidth, so the robotics "industry" has not had the same exponential growth as communications and computation. He could also mean that selling physical gadgets entails much more than simply assembling them; it means repairing them and offering warranties (an obligation that click-wrap software has wriggled out of), and even ensuring the safety of customers from potential robots-run-amok.

The truth he didn't mention is that hardware is not the reason we have no intelligent robots. In fact motors, sensors and even processors are very cheap now, and a desktop computer core with a video input and a few motorized wheels could be mass-produced for a few hundred dollars. But the software to animate it is quite literally priceless, because it doesn't yet exist. Worse, no one even knows the principles on which to write it.

Here's why.

Missing the basics

Of course people can write software specialized for specific hardware to do a specific task (like the Roomba), but such programs won't generalize to new hardware, sensors, and environments: no one yet has software which "learns" the way brains do, mostly because science doesn't even know what brains do. If we don't understand how we (or even mice) interact gracefully with an uncertain world, how could we expect to program anything else to?

At every level, even specialists lack conceptual clarity. Let's look at a few examples taken from current academic debates.

We lack a common mathematical language for generic sensory input - tactile, video, rangefinder - which could represent any kind of signal or mixed-up combination of signals. Vectors? Correlations? Templates?

Imagine this example. If one were to plot every picture from a live video-feed as a single "point" in a high-dimensional space, a day's worth of images would be like a galaxy of stars. But what shape would that galaxy have: a blob, a disk, a set of blobs, several parallel threads, donuts or pretzels? At this point scientists don't even know the structure of real-world data, much less the best ways to infer those structures from incomplete inputs, or to represent them compactly.
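To make the "galaxy" picture concrete, here is a minimal sketch (the frame size and counts are invented for illustration) of how each video frame becomes one point: a 64x64 greyscale image is simply flattened into a 4096-dimensional vector, and a day's feed becomes a cloud of such points whose shape nobody knows in advance.

```python
import numpy as np

# Stand-in for a day's worth of video: 1000 random 64x64 greyscale frames.
# (Real frames would of course be far from random - that's the whole point.)
rng = np.random.default_rng(0)
frames = rng.random((1000, 64, 64))

# Each frame becomes one "star": a single point in 4096-dimensional space.
points = frames.reshape(len(frames), -1)
print(points.shape)  # (1000, 4096)
```

The open question the article raises is precisely what structure the cloud of rows in `points` would have for real video, and how to discover it.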

And once we do know what kind of galaxies we're looking for, how should we measure the similarity or difference between two example signals, or two patterns? Is this "metric" squared-error, bit-wise, or probabilistic?

Well, in real galaxies, you measure the distance between stars by the usual Pythagorean formula. But in comparing binary numbers, one typically counts the number of differing bits (which is like leaving out Pythagoras's square root). If the stars represented probabilities, the comparisons would involve division rather than subtraction, and would probably contain logarithms. Choose the wrong formula, and the algorithm will learn useless features of the input noise, or will be unable to detect the right patterns.
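The three candidate metrics can be written down in a few lines. This is a toy sketch using standard textbook definitions (Euclidean distance, Hamming distance, and Kullback-Leibler divergence); the example values are made up:

```python
import numpy as np

# 1. Squared-error: the Pythagorean (Euclidean) distance between two signals.
a = np.array([0.1, 0.4, 0.5])
b = np.array([0.2, 0.3, 0.5])
euclidean = np.sqrt(np.sum((a - b) ** 2))

# 2. Bit-wise: the Hamming distance between two binary patterns -
#    count the differing bits (no square root this time).
x = np.array([1, 0, 1, 1])
y = np.array([1, 1, 1, 0])
hamming = int(np.sum(x != y))  # 2 bits differ

# 3. Probabilistic: treating a and b as probability distributions,
#    the comparison involves division and a logarithm (KL divergence).
kl = np.sum(a * np.log(a / b))
```

Each formula says "different" about different things, which is exactly why picking the wrong one makes a learning algorithm chase noise.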

There's more: the stars in our video-feed galaxy are strung together in time, like pearls on a string, but we don't know what kind of (generic) patterns to look for among those stars - linear correlations, data-point clusters, discrete sequences, trends?

Perhaps every time one image ("star") appears, a specific different one follows, like a black car moving from left to right in a picture. Or maybe one of two different ones follows, as if the car might be moving right or left. But if the car is white, or smaller (two very different images!), would we still be able to use what we learned about large black moving cars? Or would we need to learn the laws of motion afresh for every possible set of pixels?
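The simplest version of "one star follows another" is a first-order transition table. The sketch below assumes the hard part - reducing each frame to a discrete label like "car on the left" - has already been done somehow (the labels are invented); it then just counts which label follows which, and predicts the most frequent successor:

```python
from collections import defaultdict

# Hypothetical frame labels; reducing raw pixels to labels like these
# is itself the unsolved problem the article describes.
sequence = ["car_left", "car_mid", "car_right",
            "car_left", "car_mid", "car_right"]

# Count how often each label follows each other label.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(sequence, sequence[1:]):
    counts[prev][nxt] += 1

def predict(state):
    """Return the most frequent successor of `state`, or None if unseen."""
    followers = counts[state]
    return max(followers, key=followers.get) if followers else None

print(predict("car_left"))  # car_mid
```

Note that the table knows nothing about cars or motion: change the pixels (a white car, a smaller car) and every count must be learned from scratch - which is exactly the article's complaint.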

The problems don't end there. We don't know how to learn from mistakes in pattern-detection, to incorporate errors on-the-fly. Nor do we know how to assemble small pattern-detection modules into usefully large systems. Then there's the question of how to construct or evaluate plans of action or even simple combinations of movements for the robot.

Academics are also riven by the basic question of whether self-learning systems should ignore surprising input, or actively seek it out. Should the robot be as stable as possible, or as hyper-sensitive as possible?

If signal-processing boffins can't even agree on basic issues like these, how is Joe Tinkerer to create an autonomous robot himself? Must he still specify exactly how many pixels to count in detecting a wall, or how many degrees to rotate each wheel? Even elementary motion-detection - "Am I going right or left?" - is way beyond the software or mathematical prowess of most homebrew roboticists.
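Even that "elementary" left-or-right question takes real signal processing. A minimal sketch (one possible textbook approach, not how any homebrew kit actually works): take two successive one-dimensional scanlines, shift the earlier one each way, and see which shift matches the later one better.

```python
import numpy as np

def direction(prev, curr):
    """Guess motion direction by testing which shift of the previous
    scanline best explains the current one (squared-error match)."""
    err_right = np.sum((np.roll(prev, 1) - curr) ** 2)
    err_left = np.sum((np.roll(prev, -1) - curr) ** 2)
    return "right" if err_right < err_left else "left"

# A bright "object" (the 5-9-5 bump) that has moved one pixel right:
prev = np.array([0, 0, 5, 9, 5, 0, 0, 0], dtype=float)
curr = np.array([0, 0, 0, 5, 9, 5, 0, 0], dtype=float)
print(direction(prev, curr))  # right
```

Even this toy assumes clean signals, a single object, and exactly one pixel of motion per frame; real camera data grants none of those, which is why the problem defeats most hobbyists.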

Calling engineer Einstein!

So the tinkerers can't do the math, and the boffins can't tinker. To break that logjam we need an Einstein of engineering: part hacker, part statistician, a special blend of mathematical genius, programmer, and tinkerer.

And hopefully a businessman too.

Unique among technologies, robotics faces an insidious competition: live human beings. Almost every other revolutionary technology - steam engines, air travel, telephones, computers - accelerated crazily as it became better and better at doing what no human being could do, so even the earliest prototypes offered commercial benefits and attracted customers, reinvestment, and iterative improvement. The earliest trains, while expensive, nevertheless moved faster than horses: and that was enough to unleash the investment.  But robotic brainpower is different, because it competes with human brainpower; the "robotish-ness" is precisely what humans are better at. The dumbest human still sees, hears, and grasps better than the most expensive robot.

A similar chicken-and-egg predicament long stymied solar energy: large-scale investment made little economic sense while oil, coal, and hydro power were  much cheaper.  Solar ultimately succeeded in niche markets where it didn't compete with the mains;  autonomous robotics, likewise, needs a business application with no hope of human intervention.

Perhaps some applications are on the way. The long-term goal of Stanford Professor Sebastian Thrun, designer of a prize-winning robot car, is a self-driving car which will save humans the trouble of keeping their own eyes glued to the road for hours a day. Such robot chauffeurs would form a great business, but they are still at least a decade off. Today, they are possible only because the technology is specifically tuned to the narrow task of road-driving, with lasers, radar, GPS, and other purpose-built sensors. A robot chauffeur would not have a robot brain.

One high-profile businessman is working on real robot brains: Jeff Hawkins, founder of Palm Computing, hopes his new venture Numenta Inc will spur a business based on automatic, self-learning systems.  His system isn't robotic yet, but he champions a modular software architecture and generic API templates for coders and customers, so even if the initial algorithm sputters, it could be iteratively improved without redesigning all the infrastructure.

Such interfaces are the best news in an otherwise stagnant field. Toymaker Lego is in cahoots with Microsoft, vacuum-maker iRobot is creating an open robotics platform, and the general trend is toward standard drivers, modular programming, and interlocking parts. Once the algorithms are equally modularized, perhaps a new generation of mini-Einsteins will build a prototype or discover a business application that others can imitate and improve upon. Then we might finally have real robots in place of promised ones. ®

Bill Softky has written a neat utility for Excel power users called FlowSheet: it turns cryptic formulae like "SUM(A4:A7)/D5" into pretty, intuitive diagrams. It's free, for now. Check it out.