
Robot brains? Can't make 'em, can't sell 'em

Why dopes still beat boffins


At every level, even specialists lack conceptual clarity. Let's look at a few examples taken from current academic debates.

We lack a common mathematical language for generic sensory input - tactile, video, rangefinder - which could represent any kind of signal or mixed-up combination of signals. Vectors? Correlations? Templates?

Consider an example. If one were to plot every picture from a live video-feed as a single "point" in a high-dimensional space, a day's worth of images would be like a galaxy of stars. But what shape would that galaxy have: a blob, a disk, a set of blobs, several parallel threads, donuts or pretzels? At this point scientists don't even know the structure of real-world data, much less the best ways to infer those structures from incomplete inputs, or to represent them compactly.
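To make the "galaxy" idea concrete, here's a minimal Python sketch - using a synthetic feed, since the real data is the whole problem - that flattens each frame into a single high-dimensional point and projects the cloud onto its two longest axes to peek at its shape. The frame sizes, the drifting-blob scene and the PCA-via-SVD projection are illustrative assumptions, not anyone's published method.

```python
# A minimal sketch: synthetic greyscale frames stand in for a day's video feed.
# Each frame becomes one "star": a point whose coordinates are its pixel values.
import numpy as np

rng = np.random.default_rng(0)
n_frames, height, width = 1000, 32, 32

# Fake feed: a bright vertical stripe drifting across the image, plus sensor noise.
frames = np.zeros((n_frames, height, width))
for t in range(n_frames):
    cx = int((t / n_frames) * width) % width
    frames[t, :, cx] = 1.0
frames += 0.05 * rng.standard_normal(frames.shape)

# Flatten: every frame is now a single point in a 1024-dimensional space.
points = frames.reshape(n_frames, -1)          # shape (1000, 1024)

# Project the "galaxy" onto its two longest axes (PCA via SVD) to peek at its shape.
centred = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
projection = centred @ vt[:2].T                # shape (1000, 2)
print(projection[:5])   # a blob? a thread? a ring? that structure is the open question
```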

And once we do know what kind of galaxies we're looking for, how should we measure the similarity or difference between two example signals, or two patterns? Is this "metric" squared-error, bit-wise, or probabilistic?

Well, in real galaxies, you measure the distance between stars by the usual Pythagorean formula. But in comparing binary numbers, one typically counts the number of differing bits (which is like leaving out Pythagoras' square root). If the stars represented probabilities, the comparisons would involve division rather than subtraction, and would probably contain logarithms. Choose the wrong formula, and the algorithm will learn useless features of the input noise, or will be unable to detect the right patterns.
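For the curious, here's a minimal sketch of the three families of "distance" just described - Pythagorean, bit-counting and probabilistic - applied to vectors invented purely for illustration.

```python
import numpy as np

def euclidean(a, b):
    # Pythagorean distance between two real-valued signals
    return np.sqrt(np.sum((a - b) ** 2))

def hamming(a, b):
    # count of differing bits between two binary patterns
    return int(np.sum(a != b))

def kl_divergence(p, q):
    # probabilistic comparison: division and logarithms, not subtraction
    return float(np.sum(p * np.log(p / q)))

real_a, real_b = np.array([0.2, 0.9, 0.4]), np.array([0.1, 0.8, 0.7])
bits_a, bits_b = np.array([1, 0, 1, 1]), np.array([1, 1, 1, 0])
prob_p, prob_q = np.array([0.7, 0.2, 0.1]), np.array([0.5, 0.3, 0.2])

print(euclidean(real_a, real_b))      # ~0.33
print(hamming(bits_a, bits_b))        # 2
print(kl_divergence(prob_p, prob_q))  # ~0.09
```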

There's more: the stars in our video-feed galaxy are strung together in time like pearls on a string. But we don't know what kind of (generic) patterns to look for among those stars - linear correlations, data-point clusters, discrete sequences, trends?

Perhaps every time one image ("star") appears, a specific different one follows, like a large black car moving from left to right in a picture. Or maybe one of two different ones follows, as if the car might be moving either right or left. But if the car were a different colour, or smaller (two very different images!), would we still be able to use what we learned about large black moving cars? Or would we need to learn the laws of motion afresh for every possible set of pixels?
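A toy illustration of the simplest sequential pattern one might try: quantise the feed into a handful of discrete "stars" and count which one follows which - a first-order transition table. The cluster names below are hypothetical, and, as noted above, nothing learned here transfers to a smaller or differently coloured car.

```python
from collections import Counter, defaultdict

# Hypothetical sequence of frame clusters: the car edging rightwards, with jitter.
observed = ["car_left", "car_centre", "car_right", "car_centre", "car_right",
            "car_left", "car_centre", "car_right"]

# Count, for each "star", how often each other star follows it.
transitions = defaultdict(Counter)
for current, nxt in zip(observed, observed[1:]):
    transitions[current][nxt] += 1

# Predict the most likely successor of each star.
for star, followers in transitions.items():
    print(star, "->", followers.most_common(1)[0][0])
# Nothing here generalises: a new set of pixels means a new table, learned from scratch.
```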

The problems don't end there. We don't know how to learn from mistakes in pattern-detection - how to incorporate errors on the fly. Nor do we know how to assemble small pattern-detection modules into usefully large systems. Then there's the question of how to construct or evaluate plans of action, or even simple combinations of movements, for the robot.
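As a rough idea of what "incorporating errors on the fly" could look like in its very simplest form, here's a perceptron-style sketch that nudges a wall-detector's weights only when it guesses wrong. The sonar-like features and the hidden rule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = np.zeros(3)

for _ in range(200):
    features = rng.uniform(-1, 1, size=3)                          # e.g. crude sonar readings
    wall_ahead = 1 if features[0] + 0.5 * features[1] > 0 else -1  # hidden rule the robot must discover
    prediction = 1 if weights @ features > 0 else -1
    if prediction != wall_ahead:                                   # learn only from mistakes
        weights += wall_ahead * features

print(weights)   # weights drift towards the hidden rule, one error at a time
```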

Academics are also riven by the basic question of whether self-learning systems should ignore surprising input or actively seek it out. Should the robot be as stable as possible, or as hyper-sensitive as possible?
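One way to see the tension: the toy sketch below keeps two running estimates of a single sensor reading, one that damps surprising input and one that chases it. The numbers are made up; the point is only the contrast between the two attitudes.

```python
import numpy as np

rng = np.random.default_rng(2)
readings = np.concatenate([rng.normal(0.0, 0.1, 500),
                           rng.normal(3.0, 0.1, 500)])   # the world suddenly changes

stable, sensitive = 0.0, 0.0
for x in readings:
    surprise = abs(x - stable)
    stable += 0.01 / (1 + surprise) * (x - stable)                   # damp surprising input
    sensitive += min(1.0, 0.01 * (1 + surprise)) * (x - sensitive)   # chase it

# The stable estimate is still catching up; the sensitive one has already locked on.
print(round(stable, 2), round(sensitive, 2))
```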

If signal-processing boffins can't even agree on basic issues like these, how is Joe Tinkerer to create an autonomous robot himself? Must he still specify exactly how many pixels to count in detecting a wall, or how many degrees to rotate each wheel? Even elementary motion-detection - "Am I going right or left?" - is way beyond the software or mathematical prowess of most homebrew roboticists.
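To show just how non-trivial even "right or left?" is, here's a bare-bones sketch that estimates the shift between two one-dimensional frames by trying a few candidate offsets and keeping the best match. The synthetic scene and the sign convention are arbitrary; real feeds add noise, lighting changes and rotation, which is exactly where homebrew efforts founder.

```python
import numpy as np

rng = np.random.default_rng(3)
scene = rng.uniform(0, 1, 256)          # a 1-D "wall texture"
frame_a = scene[10:110]                 # what the robot saw a moment ago
frame_b = scene[13:113]                 # what it sees now (shifted by 3 pixels)

def shift_score(a, b, shift):
    # sum of squared differences after sliding frame_b by `shift` pixels
    if shift >= 0:
        return np.sum((a[shift:] - b[:len(b) - shift]) ** 2)
    return np.sum((a[:shift] - b[-shift:]) ** 2)

best = min(range(-5, 6), key=lambda s: shift_score(frame_a, frame_b, s))
print("estimated shift:", best, "=> moving", "right" if best > 0 else "left")
```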
