Why can't a computer be more like a brain?

Or even better?

"Why can't a computer be more like a brain?" Jeff Hawkins asked at the Emerging Technology conference this week.

Hawkins is best known as the founder of Palm and Handspring and the creator of the Graffiti handwriting recognition system. His other long-term interest is neuroscience, and he believes he has the answer to that My Fair Lady question.

Hawkins' starting point was the class of problems we have so far failed to crack with machines: computer vision, adaptive behaviour, auditory perception, touch, language, planning, and thinking.

Four explanations are commonly offered for why computers fail at these apparently simple human tasks: they aren't powerful enough; brains are too complex to understand; brains work on some principle we haven't yet discovered; brains are magic (they have souls).

Saying brains are too complex, Hawkins argued, just means we don't understand them. He believes, however, that he does understand enough. His latest company, Numenta, is attempting to build practical applications of the theories in his 2004 book On Intelligence.

For the purposes of this discussion, intelligence is the memory-based ability to predict the future, and resides in the neocortex, which wraps around the rest of the brain like a thin sheet. Its densely packed cells are organised into increasingly abstract hierarchical layers that build models of the world by exposure to changing sensory patterns.

Numenta copies this design in software it calls NuPIC (for Numenta Platform for Intelligent Computing), and reports some success in getting its program to recognise deformed versions of the pictures it already knows. Increasing the number of levels in the hierarchy should improve the complexity of the models it can handle. The key is this hierarchy.
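The idea is easier to see in miniature. The following toy sketch (an illustration of the hierarchical principle, not Numenta's actual algorithm or NuPIC's API) shows two stacked levels: the lower level memorises small patches of an input, the upper level memorises only the labels the lower level emits, so each level works with a more abstract representation than the one below. A slightly deformed input still resolves, patch by patch, to familiar labels, which is why the upper level can still recognise it:

```python
# Toy two-level hierarchy: each level memorises the patterns it has
# seen and recognises new input by nearest stored match (Hamming
# distance). Hypothetical names throughout.

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

class Level:
    def __init__(self):
        self.memory = []  # list of (pattern, label) pairs

    def learn(self, pattern, label):
        self.memory.append((tuple(pattern), label))

    def infer(self, pattern):
        """Label of the closest stored pattern."""
        return min(self.memory, key=lambda m: hamming(m[0], tuple(pattern)))[1]

# Level 1 sees 2-bit patches; level 2 sees only level 1's labels.
l1, l2 = Level(), Level()

# Train on a 4-bit "image" split into two patches.
image = [1, 1, 0, 0]
patches = [image[0:2], image[2:4]]
for i, p in enumerate(patches):
    l1.learn(p, "patch%d" % i)
l2.learn([l1.infer(p) for p in patches], "stripes")

# A deformed version (one bit flipped) still maps to the same
# abstract labels, so the upper level recognises it.
deformed = [1, 0, 0, 0]
labels = [l1.infer(deformed[0:2]), l1.infer(deformed[2:4])]
print(l2.infer(labels))  # "stripes"
```

Adding more levels on top would let the system compose "stripes"-style labels into still larger structures, which is the sense in which deeper hierarchies handle more complex models.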

The company is working on improving the system's predictive ability by adding higher-order temporal knowledge. It isn't sure what the next step after that will be; for one thing, it doesn't yet know which other parts of the brain it might need to model and integrate.
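"Higher-order" here means conditioning a prediction on more than just the current input. A minimal sketch of the distinction (illustrative only; this is a simple n-gram-style memory, not NuPIC's mechanism): a first-order model keyed on a single symbol cannot tell two sequences apart once they pass through a shared symbol, while a model keyed on the last two symbols can.

```python
# Hypothetical sequence memory keyed on a fixed-length context.
from collections import defaultdict, Counter

class SequenceMemory:
    def __init__(self, order=2):
        self.order = order
        self.transitions = defaultdict(Counter)  # context -> successor counts

    def learn(self, sequence):
        for i in range(len(sequence) - self.order):
            context = tuple(sequence[i:i + self.order])
            self.transitions[context][sequence[i + self.order]] += 1

    def predict(self, context):
        """Most frequently seen successor of this context, or None."""
        counts = self.transitions.get(tuple(context))
        return counts.most_common(1)[0][0] if counts else None

sm = SequenceMemory(order=2)
sm.learn("ABCD")
sm.learn("XYCZ")

# Keyed on "C" alone the successor is ambiguous (D or Z); with two
# symbols of context each prediction is unambiguous.
print(sm.predict("BC"))  # "D"
print(sm.predict("YC"))  # "Z"
```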

With the latest release, training the full network takes about 18 minutes. Each level of the network is trained on hundreds of thousands of iterations of the images – only a year ago this was taking days. Inference per image is down to about 10 milliseconds. A research version running on Linux or Mac OS X is available for free download and experimentation, along with white papers, learning algorithms, programmers' guides, and other documentation. A Windows version is in progress.

The company also still isn't sure what applications might find this approach valuable, though they list a wide range.

"It's like building the first computers," said Hawkins. "You knew it was an important idea, but you didn't have the CPU, compiler, or disk drive yet." He believes: "We should be able to build machines that can become deeper experts than we are. I want to build machines that are really good thinkers."

Humans become experts by long study; a machine that could emulate that process could work at things that humans are physically unsuited for, such as the physics of the very small or very large. "I want a machine that inherently thinks about physics better than humans do." ®
