If this doesn't terrify you... Google's computers OUTWIT their humans

'Deep learning' clusters crack coding problems their top engineers can't

Analysis Google no longer understands how its "deep learning" decision-making computer systems have made themselves so good at recognizing things in photos.

This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own.

The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting "deep learning" systems to work. (You can find out more about machine learning, a computer science research topic, here [PDF].)

"Deep learning" involves large clusters of computers ingesting and automatically classifying data, such as things in pictures. Google uses the technology for services such as Android's voice-controlled search, image recognition, and Google translate.

The ad-slinger's deep learning experiments caused a stir in June 2012 when a front-page New York Times article revealed that after Google fed its "DistBelief" technology with millions of YouTube videos, the software had learned to recognize the key features of cats.

A feline detector may sound trivial, but it's the sort of digital brain-power needed to identify house numbers for Street View photos, individual faces on websites, or, say, <SKYNET DISCLAIMER> if Google ever needs to identify rebel human forces creeping through the smoking ruins of a bombed-out Silicon Valley </SKYNET DISCLAIMER>.

Google's deep-learning tech works in a hierarchical way, so the bottom-most layer of the neural network can detect changes in color in an image's pixels, and then the layer above may be able to use that to recognize certain types of edges. After adding successive analysis layers, different branches of the system can develop detection methods for faces, rocking chairs, computers, and so on.
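To make that hierarchy concrete, here is a minimal sketch of how such detection layers can be stacked so each one works from the output of the layer below. It assumes the Keras library and a toy image size, and is purely illustrative – not Google's DistBelief code:

```python
# Illustrative only: a tiny stacked network in Keras, not Google's DistBelief.
# Each layer builds on the one below, going from raw pixel colour changes to
# edges to object-level features, roughly as described above.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),  # low level: local colour changes
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),   # mid level: edges and simple shapes
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # higher level: parts of objects
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),    # top: object classes (cats, shredders, ...)
])
model.summary()
```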

What stunned Quoc V. Le is that the software has learned to pick out features in things like paper shredders that people can't easily spot – to most of us, once you've seen one shredder, you've seen them all. But not so for Google's monster.

Learning "how to engineer features to recognize that that's a shredder – that's very complicated," he explained. "I spent a lot of thoughts on it and couldn't do it."

It started with a GIF: Image recognition paves the way for greater things

Many of Quoc's pals had trouble identifying paper shredders when he showed them pictures of the machines, he said. The computer system has a greater success rate, and he isn't quite sure how he could write a program to do the same.

At this point in the presentation another Googler who was sitting next to our humble El Reg hack burst out laughing, gasping: "Wow."

"We had to rely on data to engineer the features for us, rather than engineer the features ourselves," Quoc explained.

This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently of its creators, and its complex cognitive processes are inscrutable. This "thinking" is within an extremely narrow remit, but it is demonstrably effective and independently verifiable.

Google doesn't expect its deep-learning systems to ever evolve into a full-blown emergent artificial intelligence, though. "[AI] just happens on its own? I'm too practical – we have to make it happen," the company's research chief Alfred Spector told us earlier this year.

Google's AI chief Peter Norvig believes the kinds of statistical data-heavy models used by Google represent the world's best hope to crack tough problems such as reliable speech recognition and understanding – a contentious opinion, and one that clashes with Noam Chomsky's view.

Deep learning is attractive to Google because it can solve problems the company's own researchers can't, and it lets the company hire fewer inefficient meatsacks, er, human experts. And Google is known for hiring the best of the best.

By ceding advanced capabilities to its machines, Google can save on human headcount, better grow its systems to deal with a data deluge, and develop capabilities that have – so far – befuddled engineers.

The advertising giant has pioneered a similar approach of delegating certain decisions and selection tasks to machines with its Borg and Omega cluster managers, which seem to behave like "living things" in how they allocate workloads.

Given Google's ambition to "organize the world's information", the fewer people it needs to employ, the better. By developing these "deep learning" systems, Google needs to employ fewer human experts, Quoc said.

"Machine learning can be difficult because it turns out that even though in theory you could use logistic regression and so on, but in practice what happens is we spend a lot of time on data processing inventing features and so on. For every single problem we have to hire domain experts," he added.

"We want to move beyond that ... there are certainly problems we can't engineer features of and we want machines to do that."

By working hard to give its machines greater capabilities, and local, limited intelligence, Google can crack classification problems that its human experts can't solve. Skynet? No. Rise of the savant-like machines? Yes. But for now the relationship is, thankfully, cooperative. ®
