
Is that you, HAL? No, it's NEIL: Google, US Navy pour money into 'associative' AI brain

No sleep. No food. Just 4 months of internet ... poor thing

NEIL hasn't slept or eaten in four months; it has just browsed the internet, trying to figure out connections between aircraft and aircraft carriers, or hot dogs and buns.

The Never Ending Image Learner is a new approach to weak artificial intelligence that piggybacks on the immense technology fielded by companies like Google, and represents the bleeding edge of computer science research.

The system takes in batches of classified images (cars parked outside, for instance), tries to find other recognizable elements within them (such as the surrounding road), and then crunches the data to form associations.
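To get a flavor of the idea, here's a toy Python sketch of that kind of association mining: it counts how often labels co-occur across images and scores each pair by pointwise mutual information. The "detections" are hard-coded stand-ins for what a vision system would spit out; NEIL's real pipeline is considerably more elaborate.

```python
from collections import Counter
from itertools import combinations
import math

# Hypothetical per-image detections: in the real system these would come
# from vision models run over downloaded pictures; here they're hard-coded.
detections_per_image = [
    {"car", "road", "building"},
    {"car", "road"},
    {"zebra", "savanna"},
    {"zebra", "savanna", "tree"},
    {"car", "road", "tree"},
]

label_counts = Counter()
pair_counts = Counter()
for labels in detections_per_image:
    label_counts.update(labels)
    pair_counts.update(combinations(sorted(labels), 2))

n = len(detections_per_image)

def pmi(a, b):
    """Pointwise mutual information: how much more often a and b
    co-occur than independence would predict."""
    p_ab = pair_counts[(a, b)] / n
    return math.log(p_ab / ((label_counts[a] / n) * (label_counts[b] / n)))

for (a, b) in sorted(pair_counts, key=lambda p: -pmi(*p)):
    print(f"{a} <-> {b}: PMI {pmi(a, b):.2f}")
```

A high score simply means two labels turn up together far more often than chance would predict, which is roughly the statistical hunch behind "cars go with roads".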

So far it has sucked in three million images, and managed to identify 1,500 objects and 1,200 scenes in half a million pictures, then figured out some 2,500 associations from this.

Some of the things it has already "learnt" include the fact that an Airbus 330 airplane can have a part called an airplane nose, that zebras can be found in the savanna, or that a trading floor can be crowded with people.
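Facts like those map naturally onto (subject, relation, object) triples. This little Python snippet shows one way they might be stored and queried; the relation names are our own invention, not NEIL's actual schema.

```python
# Illustrative (subject, relation, object) triples for the facts above;
# the relation names are made up for this sketch.
relationships = [
    ("Airbus 330", "has_part", "airplane nose"),
    ("zebra", "found_in", "savanna"),
    ("trading floor", "can_be", "crowded"),
]

def facts_about(subject):
    """Look up every learned relation for a given subject."""
    return [(rel, obj) for subj, rel, obj in relationships if subj == subject]

print(facts_about("zebra"))   # [('found_in', 'savanna')]
```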

Though these are all (hopefully) obvious to humans, the fact that the computer has come up with these associations on its own illustrates just how good deep-learning systems are getting, and how effective they may become in the future.

"It's building upon a lot of work in computer vision using deformable part models," Carnegie Mellon University assistant research professor Abhinav Gupta, told The Reg on Monday.

The technology runs on a 200-core cluster and hoovers up internet data to create an ever-increasing web of classifications and associations to help it "understand" the content of the internet.

"NEIL is a constrained semi-supervised learning (SSL) system that exploits the big scale of visual data to automatically extract common sense relationships and then uses these relationships to label visual instances of existing categories," the researchers wrote in their academic paper [PDF] describing the tech.

Just as Google's deep-learning systems are designed to automatically recognize and classify things in images, NEIL has been built to divine the common associations of objects automatically.

Not coincidentally, the tech is funded by two entities that have a keen interest in spurring the development of intelligent hunter-killer image recognition and reasoning systems: ad-slinger Google and boffinry bankroller the US Office of Naval Research.

In fact, to take the system to the next level, the researchers need two things that benefactor Google could easily provide: more compute resources, and closer integration with the web giant's image recognition systems.

"One of the restrictions of our system is Google has restrictions on amount of images you can download from them," Gupta said.

The researchers also need humans to intervene in training the software, so that NEIL doesn't associate "pink" more strongly with the pop star of that name than with the color.
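One way such ambiguity can surface is by clustering the images filed under a single label: if "pink" splits cleanly into two visual groups, something is off, and a human can step in to name them. Here's a toy sketch with made-up feature vectors; NEIL's actual machinery is rather more involved.

```python
# Toy check for a polysemous label: cluster the image features filed
# under "pink" and see whether they split into distinct visual senses.
# The feature vectors are invented stand-ins for real image descriptors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
swatches = rng.normal(0, 0.5, (50, 8))   # stand-in: color-swatch images
concerts = rng.normal(3, 0.5, (50, 8))   # stand-in: pop-concert photos
features = np.vstack([swatches, concerts])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
sizes = np.bincount(kmeans.labels_)
print(f"'pink' splits into visual sub-senses of sizes {sizes.tolist()}")
# A human then only has to name the clusters (the color vs the singer)
# rather than relabel every image by hand.
```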

Though the problem of precisely identifying stuff in images is "nowhere near being solved", the tech has got good enough that systems like NEIL can piggyback on top of it and still more or less work, Gupta said.

"I think we have reached a stage where we can try to do associative learning. I think this is a long-term problem with some short-term gains we can see. Short term – we can understand data faster and better, your individual models are also becoming better. These reasoning models can start to come into our system."

But as with most modern AI problems, more data is needed for this to get better. "We might need millions of relationships before things are easy to work," Gupta says. "This is the first scratch of the surface."

The amount of funding both the ONR and Google have poured into NEIL is undisclosed. Google did not respond to a request for further information. ®
