Aye, AI: Cambridge's Dr Sean Holden talks to El Reg about our robot overlords

They're not our overlords: They're here already


Interview Fatigued by bluster about the implications of yesteryear's machine learning algorithms, we spoke to Dr Sean Holden of the University of Cambridge to enthuse ourselves again.

Holden, a senior lecturer in machine learning at Cambridge's Computer Laboratory and a fellow of Trinity College, where he shepherds students along as a director of studies, told The Register he doesn't buy into the Skynet panic – but remains interested in the field.

“Clearly there have been some recent big steps in the subject, but there have been similar big steps throughout the history of artificial intelligence – which isn't that long – and I think this is the same as what we've had previously,” Holden said.

The senior lecturer explained: “There's been a step forward in a particular competence, but you're going to need an awful lot of those to get to a human functionality. There's a big, big gap between where we are with AI, and where humans are, and that much hasn't changed.”

Artificial intelligence, as a term, is believed to have been coined in 1956 by John McCarthy, who won the Turing Award 16 years later for his contributions to the field. Since its inception, that field has been predicated upon investigating whether a machine may accurately simulate the intellectual functions of a human.

The Association for Computing Machinery's biography of McCarthy noted that his work “has emphasized epistemological problems—the problems of what information and what modes of reasoning are required for intelligent behavior.”

You can't measure AI from 0 to 100 and say we're currently at 22. It's nothing like that straightforward.

Holden did not think there was a single definition of intelligence, however, nor a means of quantifying the sophistication of machines and comparing it to human sapience: “Human intelligence has been quantified for different purposes, clearly, as intelligence testing shows – you can get an IQ number.”

“In terms of artificial intelligence research in trying to reproduce the many, many different abilities that a human has, and getting the AI to work in a similar kind of way,” said Holden, “it's not something you'd really try and just put a number on. You can't measure AI from 0 to 100 and say we're currently at 22. It's nothing like that straightforward.”

That said, he noted “at the moment you would have some obvious examples of fairly big steps forward that it's probably fair to say were quite unexpected. Watson winning at Jeopardy I think was a huge achievement and came as quite a surprise,” Holden told us, but noted that “when you look into how that was achieved it's not so surprising. IBM put a lot of good people and a lot of money into that effort.”

IBM's question answering system, Watson, triumphed over human contestants in a three-day bout of the Jeopardy! game show in 2011. The victory was not just one for Big Blue's marketing department, but also for AI's subfield of open-domain question answering.

While information retrieval has existed for as long as computers have had indexed memory, answering an open-ended question required Watson to disambiguate and contextualise queries supplied to it in language which required interpretation.

Holden said he “got the impression, from having seen some of Watson's team talk, that even they were quite surprised” at the Jeopardy! win. “I may be wrong there, but they were rightly quite chuffed at the results they got.”

Would you like to play a game?

“Games are great for AI, because there are things like Go that are very, very hard, but games are sufficiently well defined that you can actually make some progress with them. They give a natural kind of yardstick,” said Holden.

Earlier this year, Google's DeepMind artificial intelligence team published a paper showing that their machine had, for the first time, beaten a top human player in the board game Go. Holden explained: “If you have a game like Go where humans have been superior for a long time, it gives you a very specific goal to work towards. There was a step forward in Go a few years back, with something called Monte Carlo tree search, which bordered on quite a big leap all in one go.”

Monte Carlo tree search (MCTS) is an algorithm which allows machines to find a good decision from a search tree by randomly sampling playouts of the possible moves it could make. Compared with Google's DeepMind paper, however, the earlier MCTS steps forward “achieved considerably less publicity,” said Holden.
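To make the idea concrete, here is a minimal sketch of MCTS – our own illustration, not DeepMind's code – applied to a toy game of single-pile Nim (players alternately take one to three stones; whoever takes the last stone wins). It shows the four classic phases: selection via UCB1, expansion, random simulation, and backpropagation.

```python
import math
import random

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones           # stones remaining after `move` was played
        self.parent = parent
        self.move = move               # move that produced this state
        self.children = []
        self.untried = legal_moves(stones)
        self.wins = 0.0                # from the view of the player who just moved
        self.visits = 0

    def ucb1_child(self, c=1.4):
        # UCB1 balances exploitation (win rate) against exploration
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(stones, iterations=2000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1
        while not node.untried and node.children:
            node = node.ucb1_child()
        # 2. Expansion: add one as-yet-untried move
        if node.untried:
            move = node.untried.pop()
            child = Node(node.stones - move, node, move)
            node.children.append(child)
            node = child
        # 3. Simulation: play the rest of the game at random
        s, moves_made = node.stones, 0
        while s > 0:
            s -= random.choice(legal_moves(s))
            moves_made += 1
        # Whoever took the last stone wins; an even number of playout
        # moves means the player who moved into `node` won.
        result = 1.0 if moves_made % 2 == 0 else 0.0
        # 4. Backpropagation: flip the result at each level of the tree
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(0)
print(mcts(5))  # optimal play takes 1, leaving the opponent 4 stones
```

The appeal for games like Go is that nothing here needs a hand-crafted evaluation function – random playouts alone supply the estimate of how good a position is, which is what made MCTS such a step forward for combinatorially huge games.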

“The more recent step appears to have got it to a point of being able to play at the Dan level which is something it's probably fair to say nobody was expecting to happen for a while yet, because Go is, combinatorially, so complicated,” he added.

“I'm in the process of reading what's been published at the moment and as far as I can see it's largely a case of combining a couple of fairly recent technologies and an awful lot of computing power that has managed to get them there.”

There are other games that the field of AI is attempting to explore, said Holden: “A two-player version of poker has been done by AI, that's at a human-like level.”

As far as the senior lecturer was aware, “the full game with multiple players is still something where humans will dominate, because in order to be good at poker you have to model the other players in terms of their risking and bluffing behaviours.”

Holden told us of a game called Mao, “where the only rule is that you can't tell a new player what the rules are. It's a card game. So a new player in Mao has to infer, by playing, what the rules of the game are. And that is, as far as I'm aware at the minute, completely beyond any of the AI stuff that we have.”

Research vs industry

While research was more interesting than industry from Holden's perspective, as a researcher, he acknowledged that “there are companies doing really interesting things at the moment.”

“Usually if I want to talk about industry in this area that has been interesting, it's not necessarily such a current example,” said Holden, “but I was a PhD student with Mike Lynch – under the same supervisor – and he set up Autonomy, and obviously he's recently sold it. That was a technology company which was doing interesting things from his academic grounding in machine learning, and taking people and exploiting that kind of work.”

AI has historically had a way of seeping itself into your life in a way that's almost invisible

There is a lot going on in industry at the moment, according to Holden, and the technology has been applied for quite a long time. “My group did a lot of work with Glaxo Smith Kline for a few years in the area of drug design, and putting AI in the drug design process.”

Holden's team currently works with biochemists, as he told us, “on stuff like protein localisation within cells. This kind of stuff has the potential to be spun out in the right context and a lot of people are doing that. AI has historically had a way of seeping itself into your life in a way that's almost invisible, and that will continue.”

“A lot of people wouldn't accept even that their phone can recognise their voice as AI, because it's so ubiquitous,” he said, “but that's decades of AI research required in order to get it inside your phone and work reliably. That's a perfect example.”

State of the art

Defining what was at the very forefront of the AI field was not so obvious.

“Silicon Graphics, and possibly some other people at one point a few years ago, put in a lot of money to make systems that construct huge decision trees, and they did some quite interesting things there,” Holden said, “but maybe it was a bit premature and not the right technology.”

“People have had some success recently in exploiting deep nets, which are essentially just very, very, very big neural networks – with some extras in there that are quite clever, to actually get them to do something interesting and useful – but the idea of doing those kind of died off for a while and then came back.”

Advances in that area slowed down “because people were finding, with the available computational power a couple of decades back, that there didn't seem much to be gained by making the networks deep – meaning you have many, many layers of processor.”

“What's happened more recently is people have come up with some better ways of forming those architectures and training them, and have used that in conjunction with much more cheaply available computing power, and also in the case of a company like Google, that they have access to a very large quantity of data.”

“The computing power, the network architecture, and the availability of a very large amount of data together have combined to let you do some really interesting things – and that's kind of fed into the developments with Go as well. Of course, that's a more problematic approach if you don't have much data.”

It's all about the data

The size and quality of the data are pretty much vital, suggested Holden, for those wanting to use deep networks: “The architectures tend to be very large, you have a lot of parameters to set,” said Holden, and so “you tend to need a lot of data to get them to do something useful. So for applications where you have that available then that's brilliant.”

“It's not so clear-cut when you have small quantities of data. In that context it seems at the moment that there are alternative technologies that will still dominate when you have little data to deal with,” said Holden.

He explained: “So things like Bayesian networks, and the area of Bayesian inference in general, where you have a model which you can tune using the data, and where, if you have a smaller amount of data, you can more carefully craft the model to be appropriate to it. And for instance you can get not just a prediction but a measure of how confident you are in your prediction in quite a nice way.”
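Holden's point about getting a prediction together with a measure of confidence can be illustrated with the simplest Bayesian model there is – a conjugate Beta-Bernoulli update. This is our own sketch, not an example from his group's work: with a handful of observations the posterior prediction comes with wide uncertainty, and the same success ratio over more data tightens it.

```python
import math

def beta_bernoulli_posterior(successes, failures, a_prior=1.0, b_prior=1.0):
    """Update a Beta(a, b) prior with Bernoulli observations.

    Returns the posterior mean (the predicted probability of success)
    and the posterior standard deviation (how confident we are in it).
    """
    a = a_prior + successes
    b = b_prior + failures
    mean = a / (a + b)                              # predictive probability
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))    # posterior variance
    return mean, math.sqrt(var)

# Small data: 3 successes out of 4 trials -> wide uncertainty
mean, sd = beta_bernoulli_posterior(3, 1)
print(f"P(success) = {mean:.2f} +/- {sd:.2f}")

# Same ratio over ten times the data -> similar prediction, tighter confidence
mean2, sd2 = beta_bernoulli_posterior(30, 10)
print(f"P(success) = {mean2:.2f} +/- {sd2:.2f}")
```

The crafting Holden describes happens in the prior: with little data, choosing `a_prior` and `b_prior` to reflect what you already know about the problem matters a great deal, whereas a deep network has no comparable place to put that knowledge.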

“There are different reasons where that kind of architecture, where that kind of technology, can be a more appropriate thing to use,” he added, “but certainly at the minute it seems as if you have a great deal of data then there are really interesting things to be done with deep network architectures.”

The future

It is difficult to say where the field will be going, said Holden, "but the one thing you probably can take for granted is that it'll be almost invisible because immediately when something becomes a solved problem in AI, it stops looking like AI."

"The fact that your car can do automatic braking, because it's realised that the car in front of it (a) is a car and (b) is slowing down with a particular profile that means it's braking heavily, and (c) has worked out how to brake your car in such a way that you don't get whiplash, and so on and so on – these things are not straightforward problems; but they don't look like AI once you get them deployed."

"So it's going to be as it always has, this stuff will seep in, pretty much under the radar, over time." ®
