Robot brains? Can't make 'em, can't sell 'em

Why dopes still beat boffins

At every level, even specialists lack conceptual clarity. Let's look at a few examples taken from current academic debates.

We lack a common mathematical language for generic sensory input - tactile, video, rangefinder - which could represent any kind of signal or mixed-up combination of signals. Vectors? Correlations? Templates?

Consider an example: if one were to plot every picture from a live video-feed as a single "point" in a high-dimensional space, a day's worth of images would be like a galaxy of stars. But what shape would that galaxy have: a blob, a disk, a set of blobs, several parallel threads, donuts or pretzels? At present, scientists don't even know the structure of real-world data, much less the best ways to infer those structures from incomplete inputs, or to represent them compactly.
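
To make the galaxy image concrete, here is a minimal sketch in Python (with NumPy). The random "frames" are invented stand-ins for a real video-feed: each frame is flattened into one high-dimensional point, and a quick PCA hints at how many directions the cloud of points actually spreads along.

```python
# A minimal sketch of the "galaxy" idea: each video frame becomes one
# point in a high-dimensional space, and PCA peeks at the cloud's shape.
# The frame sizes and the random pixel data are placeholders, not real video.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((1000, 32, 32))        # pretend: 1000 greyscale frames
points = frames.reshape(len(frames), -1)   # each frame -> one 1024-D "star"

# Principal component analysis: along how many directions does the cloud
# really spread? A "disk" would concentrate the variance in two axes.
centred = points - points.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centred, full_matrices=False)
variance = singular_values**2 / centred.shape[0]
explained = variance / variance.sum()
print("variance captured by the first 5 axes:", explained[:5].round(3))
```

Real footage, unlike this uniform noise, would concentrate its variance in far fewer directions - which is exactly the structure researchers are still trying to characterise.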

And once we do know what kind of galaxies we're looking for, how should we measure the similarity or difference between two example signals, or two patterns? Is this "metric" squared-error, bit-wise, or probabilistic?

Well, in real galaxies, you measure the distance between stars by the usual Pythagorean formula. But in comparing binary numbers, one typically counts the number of differing bits (which is like leaving out Pythagoras' square root). If the stars represented probabilities, the comparisons would involve division rather than subtraction, and would probably contain logarithms. Choose the wrong formula, and the algorithm will learn useless features of the input noise, or will be unable to detect the right patterns.
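
For illustration, here is one plausible rendering of those three candidate metrics in Python. The example vectors are invented, and real systems would choose among many variants of each.

```python
# The three candidate "distances" from the paragraph above. Which one is
# right depends entirely on what the signals are taken to mean.
import numpy as np

def squared_error(x, y):
    # Pythagorean-style (squared): natural for continuous-valued signals.
    return float(np.sum((x - y) ** 2))

def hamming(x, y):
    # Bit-wise: count differing bits (the square root is "left out").
    return int(np.count_nonzero(x != y))

def kl_divergence(p, q, eps=1e-12):
    # Probabilistic: division and logarithms, as described above.
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

print(squared_error(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # 5.0
print(hamming(np.array([1, 0, 1, 1]), np.array([1, 1, 1, 0])))    # 2
print(kl_divergence([0.7, 0.3], [0.5, 0.5]))                      # ~0.082
```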

There's more: the stars in our video-feed galaxy are strung together in time, in sequence, like pearls on a string. But we don't know what kind of (generic) patterns to look for among those stars - linear correlations, data-point clusters, discrete sequences, trends?

Perhaps every time one image ("star") appears, a specific different one follows, like a black car moving from left to right in a picture. Or maybe one of two different ones follows, as if the car might be moving right or left. But if the car were a different colour, or smaller (two very different images!), would we still be able to use what we learned about large black moving cars? Or would we need to learn the laws of motion afresh for every possible set of pixels?
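
As a hedged sketch of what "which star follows which" might look like: assuming the frames had already been clustered into a handful of prototype images (a big assumption - the cluster labels below are just random stand-ins), one could count transitions between prototypes and estimate which image tends to follow which.

```python
# Count which frame-prototype follows which, as a crude sequence model.
# The random "sequence" stands in for a day's video already clustered
# into prototype indices; real clustering is assumed, not shown.
import numpy as np

rng = np.random.default_rng(1)
n_prototypes = 4
sequence = rng.integers(0, n_prototypes, size=5000)

counts = np.zeros((n_prototypes, n_prototypes))
for a, b in zip(sequence[:-1], sequence[1:]):
    counts[a, b] += 1

transition = counts / counts.sum(axis=1, keepdims=True)
# transition[i, j]: estimated probability that prototype j follows i.
# A deterministic "car moves left to right" would put ~1.0 in one column;
# "car might go either way" would split the row between two columns.
print(transition.round(2))
```

Note what such a table cannot do: it learns nothing about "cars" or "motion" as such, so a smaller or differently-coloured car lands in new prototypes and the counting starts from scratch - the very problem the paragraph above describes.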

The problems don't end there. We don't know how to learn from mistakes in pattern-detection - that is, how to incorporate errors on-the-fly. Nor do we know how to assemble small pattern-detection modules into usefully large systems. Then there's the question of how to construct or evaluate plans of action, or even simple combinations of movements, for the robot.
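
One classical, if minimal, answer to "incorporating errors on-the-fly" is the perceptron rule, which changes nothing unless a prediction is wrong. The toy task below (is the sum of the inputs positive?) is invented purely for illustration.

```python
# A sketch of learning from mistakes on-the-fly: the perceptron update,
# which nudges its weights only when a prediction turns out wrong.
import numpy as np

rng = np.random.default_rng(2)
weights = np.zeros(3)

for step in range(200):
    x = rng.uniform(-1, 1, size=3)
    target = 1 if x.sum() > 0 else -1        # the "true" pattern (toy)
    prediction = 1 if weights @ x > 0 else -1
    if prediction != target:                 # incorporate the error immediately
        weights += target * x

print("learned weights:", weights.round(2))  # roughly equal, positive weights
```

That rule works for one linearly-separable decision; nobody knows how to chain thousands of such modules into the "usefully large systems" the paragraph asks for.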

Academics are also riven by a basic question: should self-learning systems ignore surprising input, or actively seek it out? Should the robot be as stable as possible, or as hyper-sensitive as possible?
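
Those two attitudes can be caricatured in code. Below is a hedged toy - invented data, invented gain rules, and mere surprise-weighting rather than genuine active exploration: two running-mean estimators face the same stream, one damping its updates on surprising samples, the other amplifying them.

```python
# Stability vs hyper-sensitivity as two running-mean estimators.
# The data stream, with its sudden jump in mean, is invented.
import numpy as np

rng = np.random.default_rng(3)
stream = np.concatenate([rng.normal(0.0, 1.0, 500),   # world as usual...
                         rng.normal(5.0, 1.0, 300)])  # ...then it changes

stable, curious = 0.0, 0.0
for x in stream:
    err_s = x - stable
    err_c = x - curious
    stable  += 0.02 * err_s / (1 + err_s**2)    # damp surprising samples
    curious += 0.02 * err_c * (1 + abs(err_c))  # amplify surprising samples

# After the jump, "curious" snaps to the new mean while "stable" lags far behind.
print(f"true mean is now 5: stable={stable:.2f}, curious={curious:.2f}")
```

Neither rule is right in general: the damped estimator shrugs off genuine change, the eager one chases every fluke - the trade-off in miniature.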

If signal-processing boffins can't even agree on basic issues like these, how is Joe Tinkerer to create an autonomous robot himself? Must he still specify exactly how many pixels to count in detecting a wall, or how many degrees to rotate each wheel? Even elementary motion-detection - "Am I going right or left?" - is way beyond the software or mathematical prowess of most homebrew roboticists.
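
Even that left-or-right question admits only crude toy answers. The sketch below - far short of a robust motion detector - merely tracks whether the bright stuff in view drifts left or right between two frames; the synthetic "blob" frames are invented stand-ins for real images.

```python
# A crude left-or-right detector: compare the horizontal centroid of
# brightness between two frames. The synthetic frames are placeholders.
import numpy as np

def brightness_centroid_x(frame):
    cols = np.arange(frame.shape[1])
    weights = frame.sum(axis=0)            # total brightness per column
    return (cols * weights).sum() / weights.sum()

frame_a = np.zeros((8, 16)); frame_a[3:5, 4:7] = 1.0   # blob on the left
frame_b = np.zeros((8, 16)); frame_b[3:5, 9:12] = 1.0  # same blob, shifted right

shift = brightness_centroid_x(frame_b) - brightness_centroid_x(frame_a)
print("moving right" if shift > 0 else "moving left")  # -> moving right
```

Lighting changes, multiple moving objects, or the camera itself moving would break this immediately - which is the point.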
