Microsoft's new 'Adam' AI trounces Google ... and beats HUMANS

Asynchronous design crucial in giving 120-machine cluster the edge in neural-network wars

The battle for neural-network dominance has heated up: Microsoft has developed a cutting-edge image-recognition system that has trounced a rival system from Google.

The company revealed "Project Adam" on Monday and claimed that the system is fifty times faster and roughly twice as accurate as Google's own DistBelief system.

In one experiment on the ImageNet benchmark, Project Adam sorted millions of input images into around 22,000 categories, getting it right 29.8 percent of the time versus around 15.8 percent for Google's system and around 20 percent for a typical human.

Project Adam is a weak artificial intelligence system that Microsoft researchers use to process and categorize large amounts of data. Though it has so far been tested on its ability to recognize traits of images, it would work just as well for learning to tell the difference between bits of text or audio, Microsoft said.

With Project Adam, Microsoft has figured out how to get a powerful learning algorithm to run on lots and lots of computers that are each crunching numbers at different speeds. Put more technically, Adam "is a distributed implementation of stochastic gradient descent," explained Microsoft researcher Trishul Chilimbi in a chat with El Reg.
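
Stripped of the engineering, the recipe looks something like the sketch below: carve the training data up across a bunch of workers, have each worker compute gradients on its own shard, and apply all of those nudges to one shared set of weights. To be clear, this is a minimal, hypothetical Python/NumPy illustration on a toy linear model, with function and parameter names of our own invention – it is not Microsoft's implementation.

```python
# Purely illustrative sketch of data-parallel stochastic gradient descent on a
# toy linear least-squares model. Not Microsoft's code: every name here is our
# own assumption, made to show the general technique Chilimbi describes.
import numpy as np

def gradient(weights, x_batch, y_batch):
    """Gradient of the mean squared error of a linear model on one mini-batch."""
    return 2.0 * x_batch.T @ (x_batch @ weights - y_batch) / len(y_batch)

def distributed_sgd(x, y, n_workers=4, steps=200, lr=0.01, batch=32):
    """Shard the data across pretend 'workers'; each contributes gradient steps."""
    weights = np.zeros(x.shape[1])
    shards = list(zip(np.array_split(x, n_workers), np.array_split(y, n_workers)))
    for _ in range(steps):
        for x_shard, y_shard in shards:  # on a real cluster these run in parallel
            idx = np.random.randint(0, len(y_shard), size=batch)
            weights -= lr * gradient(weights, x_shard[idx], y_shard[idx])
    return weights

# Example on synthetic data: approximately recover a hidden weight vector
rng = np.random.default_rng(0)
true_w = rng.standard_normal(5)
x = rng.standard_normal((4000, 5))
y = x @ true_w + 0.1 * rng.standard_normal(4000)
print(distributed_sgd(x, y), true_w)
```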

Though Project Adam uses the same type of learning algorithm as that pushed by Google – "the fundamental training algorithms to train these networks, they're not really new, they're from the 80s," Chilimbi notes – it does so using fewer computers that have been tied together in a more efficient way.

This is because Project Adam is built to run asynchronously. The surprising thing is that the asynchrony may have made the system not just faster, but more accurate as well.

"We hypothesize that this accuracy improvement arises from the asynchronous in Adam which adds a form of stochastic noise while training that helps the models generalize better when presented with unseen data," Microsoft's researchers write in a paper describing the tech and seen by The Register. The paper is still in review, and not public.

Because Adam is built around asynchronous updates, some parts of the system are occasionally given unexpected bits of data, which they then have to train and optimize against. In the same way that spicing up a boring work day in front of a computer with something non-work related, like a sudden bout of creative swearing or perhaps going to a window and leering at pedestrians on the street below, can give a useful jolt to our own grey matter, Project Adam is able to learn more efficiently by sometimes being given out-of-order data.

The asynchronous approach "allows you to jump out of unstable local minima to local minimas that are better," Chilimbi told us.

"Say I'm in a small submersible at the bottom of the ocean and trying to find the deepest point and have very limited visibility around me. If I go in some ridge somewhere and get stuck and look around I think I'm in the deepest spot.

"Now, say, I also have some kind of propulsion system which allows me to jump out of some of these deep things that are not super, super deep, this gives me an opportunity if I jump out of some of these things as a way to find other things significantly deeper."

As for the future, it's likely Microsoft will work to integrate Project Adam into its products, just as Google has done with its own image recognition.

"While we have implemented and evaluated Adam using a 120 machine cluster, the scaling results indicate that much larger systems can likely be effectively utilized for training large Deep Neural Networks (DNNs)," the researchers wrote.

There's lots more to be done on DNNs, like lashing multiple datatypes together to create systems that develop representations of both image and word concepts and tie them together, and Chilimbi admitted that there are some things Project Adam lacks.

In the far future, other areas of AI research are likely to include "more temporal data, much more associative memory," he said – which happens to be the exact area being worked on by former Palm chief and now renegade neuroscientist Jeff Hawkins. ®
