Software engineers – the ultimate brain scientists?
Part I: Everything you know about AI is probably wrong
Guest Opinion Bill Softky is a scientist at the Redwood Neuroscience Institute, founded by Jeff Hawkins. He has worked as a software architect, visualization designer, and educator.
Can software engineers hope to create a digital brain? Not before understanding how the brain works, and that's one of the biggest mysteries left in science. Brains are hugely intricate circuits of billions of elements. We each have one very close by, but can't open it up: it's the ultimate Black Box.
The most famous engineering brain models are "Neural Networks" and "Parallel Distributed Processing." Unfortunately both have failed as engineering models and as brain models, because they make certain assumptions about what a brain should look like.
A neuron is the brain's computational building block, its "transistor". But the trouble is that real problems such as robotic motion and planning, audio or visual segmentation, and real-time speech recognition are not yet well enough understood to justify any particular circuit design, much less a "neural" one. So the "neurons" and "networks" of those models are idealized fantasies, designed to support mathematically elegant theories, and they have not helped to explain real brains.
There is an abundance of research on brains: discovering which areas light up when you solve certain problems, which chemicals are inside and outside neurons, and how drugs change one's moods, all amounting to thousands of research papers and thousands of researchers.
But there remain two huge mysteries: what the brain's neural circuit really is, and what it does.
Here's what we already know about brain circuitry.
We know that neurons are little tree-shaped cells with tiny branches gathering electrochemical input, and a relatively long output cable which connects to other neurons. Neurons are packed cheek-to-jowl in the brain: imagine a rainforest in which the trees and vines are so close their trunks all touch, and their branches are completely intertwined. A big spaghetti mess, with each neuron sending to and receiving from thousands of others.
It's not a total hash; like a rainforest, that neural tangle has several distinct layers. And fortunately, those layers look pretty much the same everywhere in the main part of the brain, so there is hope that whatever one layered circuit does with its inputs (say, visual signals), other layers elsewhere may be doing something similar with their inputs.
OK, so we know what a brain circuit looks like, in the same sense that we know what a CPU layout looks like. But for several reasons, we don't know how it works.
First, we don't know how a single neuron works. Sure, we know that in general a neuron produces more output pulses when it gets more inputs. But we don't know crucial details: depending on dozens of (mostly unknown) electrochemical properties, that neuron might be adding up its inputs, or multiplying them, or responding to averages, or responding to sudden changes. Or maybe it does one function at some times, on some of its branches, and other functions on other branches or at other times. For now, we can't measure brain neurons well enough to do more than guess at their input/output behavior.
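To see why those details matter, here is a toy sketch (with invented numbers, not a biological model) of two of those hypothetical neurons: one that leakily sums its inputs and fires on reaching a threshold, and one that responds only to sudden changes. Fed the same inputs, they behave completely differently:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy summing neuron: leaky running sum of inputs, pulse on threshold."""
    v, spikes = 0.0, 0
    for x in inputs:
        v = v * leak + x          # leaky summation of input
        if v >= threshold:
            spikes += 1
            v = 0.0               # reset after a pulse
    return spikes

def change_detector(inputs, threshold=0.5):
    """Toy change-detecting neuron: fires on sudden jumps, ignores steady drive."""
    spikes, prev = 0, 0.0
    for x in inputs:
        if abs(x - prev) >= threshold:
            spikes += 1
        prev = x
    return spikes

steady = [0.3] * 20                       # constant input
bursty = [0.0] * 10 + [1.0] + [0.0] * 9   # one sudden jump

# The summing neuron fires repeatedly on steady input; the change
# detector stays silent until something actually changes.
print(lif_neuron(steady), lif_neuron(bursty))
print(change_detector(steady), change_detector(bursty))
```

Both "neurons" see identical inputs; which one a real cell resembles - if either - is exactly the unmeasured detail.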
Second, we can't tell how the neurons are connected. Sure, neurons are connected to neighboring neurons. But that isn't very helpful. It's like saying that chips in a computer are connected to neighboring chips. It doesn't explain the specific circuitry. The best biologists can do is trace connections between handfuls of neurons at a time in a dead brain, and if they're lucky, they can even record the simultaneous outputs from a handful of neurons in a live brain. But all the interesting neural circuits contain thousands to millions of neurons, so measuring just a few is hopelessly inadequate, like trying to understand a CPU by measuring the connections between - or the voltages on - a few random transistors.
Third, we don't understand neurons' electrical code. We do know that neurons communicate by brief pulses, and that the pulses from any one neuron occur at unpredictable times. But is that unpredictability a random noise, like the crackle of static, or a richly-encoded signal, like the crackle of a modem? Must the recipient neurons average over many input pulses, or does each separate pulse carry some precise timing?
Finally, we don't know how brains learn. We're pretty sure that learning mostly involves changes in the connections between neurons, and that those connections form and strengthen based on local voltages and chemicals. But it's devilishly hard even to record from two interconnected neurons, much less to watch a connection change while knowing or controlling everything that affects it. And what about the factors which create brand-new connections, or kill off old ones? Those circuit changes are even more drastic, yet nearly impossible to measure.
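The leading hypothesis for that strengthening - Hebb's "fire together, wire together" rule - can at least be sketched. The code below is a cartoon of the hypothesis, with an invented learning rate, not a statement of what real synapses do:

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's hypothesis: a connection strengthens when the neurons on both
    sides of it are active at the same time. `pre` and `post` are activity
    levels in [0, 1]; `lr` is a made-up learning rate."""
    return w + lr * pre * post

# Correlated activity: both neurons active together, so the weight grows.
w = 0.2
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)

# Uncorrelated activity: the downstream neuron is silent, weight unchanged.
w2 = 0.2
for _ in range(10):
    w2 = hebbian_update(w2, pre=1.0, post=0.0)

print(w, w2)
```

Even this cartoon hides the hard part: in a real brain nobody can measure `pre`, `post`, and `w` at once, let alone the rule connecting them.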
So here's what we don't know about brain circuitry: we don't know what a single neuron does, what code neurons use, how they are connected, or how the connections change with learning. Without such knowledge, we can't reverse-engineer brains to deduce their function from their structure.
But what about "forward engineering"? What about starting with the problem specification - what brains do - and saving the circuitry for later?
Again, we are overwhelmed with detail. We know a lot about what specific neurons do when exposed to specific sensory inputs. For example, we know that some brain neurons respond to small contours of light, some to small bits of motion, some to certain shapes, some to colors, and some to faces, and there are dozens of similar responses in the visual system alone. Likewise in sound: some neurons respond to chirps, some to hisses, some to tones, some to sounds suddenly starting or stopping. There are thousands of research papers detailing more specific neuron functions than you could ever want to know.
But two insights are missing from this mass of detail.
First, those hard-won neuronal recordings are not of brains doing what they usually do: interpreting and interacting with the real world. These recorded brains are instead exposed to highly artificial, constrained stimuli, chosen specifically to make a few neurons active enough to be measured. The dirty secret of neurophysiology is that under normal circumstances - viewing ordinary scenes, listening to ordinary sounds - neurons don't fire very much at all, and when they do fire, the cause is mysterious. That near-silence doesn't make interesting research papers, so scientists need to impose striking circumstances - like flashing high-contrast shapes at an animal in a darkened room - in order to make a neuron do anything measurable. If you want clear data, you have to give the animal some very weird inputs.
The second problem with all this neural data is that it comes from mature neurons which have already learned, somehow, to do whatever they do. But neurons aren't hard-wired: presumably, growing up with different inputs would have spawned different connections, teaching that neuron to produce a different response. In fact, it seems as if exposure to visual input makes a neuron learn a typical visual response, but exposure to auditory input makes it learn a typical hearing response. So we know something about what the responses are, but not why they got that way.
Grand theories to the rescue
So, despite copious data, we have no idea how the brain circuit works, how it learns, or why its pieces do what they do. Fortunately, there is one avenue left to make sense of this, and it isn't hamstrung by the difficulty of measuring tiny, intertwined cells in live animals.
The huge missing piece is a theory of what a brain ought to do. Think of a human brain as a black box, having about a million inputs (sensory nerves) and half a million outputs (to muscles). You can think of the inputs as TV-pixel or mechanical sensor signals, and the outputs as driving little motors or pistons. At a minimum, the black box needs some formulae by which it can discover patterns in the inputs, and can create useful patterns of outputs.
We know that input from the outside world has lots of patterns and regularity. For example, pixels are clumped into contours, moving objects, shadows. And output to the muscles needs to be patterned - coordinated - like the specific contractions of walking, grasping, or throwing. But suppose you needed to program the black box to discover those input and output patterns on its own, from experience. What would you do? If you had a staff of a thousand programmers, what would you tell them to program?
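One way to make that specification concrete is as an interface. Everything below - the sizes, the method names - is illustrative scaffolding invented for this sketch, and the unimplemented bodies are precisely the unsolved problem:

```python
class BrainBlackBox:
    """Sketch of the black-box spec: sensory signals in, motor signals out,
    with pattern discovery on both sides. Nothing here is implemented,
    because nobody knows what the thousand programmers should write."""

    N_INPUTS = 1_000_000   # sensory nerves (the article's rough figure)
    N_OUTPUTS = 500_000    # motor outputs (the article's rough figure)

    def step(self, sensory: list[float]) -> list[float]:
        """Map one tick of sensory input to motor output, while updating
        internal state so that future responses improve with experience."""
        raise NotImplementedError  # the open problem

    def discover_input_patterns(self) -> None:
        """Find regularities in the input stream (contours, moving
        objects, shadows) without supervision."""
        raise NotImplementedError

    def discover_output_patterns(self) -> None:
        """Find coordinated motor sequences (walking, grasping,
        throwing) that are useful in the world."""
        raise NotImplementedError
```

The interface is easy to state; the bodies of those three methods are the mystery the rest of this article circles around.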
Nobody knows the answer, but in the concluding part, we'll look at some of the tricks that are probably involved. ®