
Google DeepMind cyber-brain cracks tough AI challenge: Beating a top Go board-game player

Robo elbows bio Go pro

A Google-designed artificial intelligence system has for the first time beaten a top human player at the board game Go.

A team of researchers from Google DeepMind said in a Nature article [PDF] that their AlphaGo program is not only able to beat 99 per cent of all previous Go-playing systems, but has also beaten Fan Hui – Europe's top player – in all five games of a head-to-head series.

Jon Diamond, president of the British Go Association, said he expected "it would be at least 5-10 years before a program would be able to beat the top human players."

"This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away," the UK-based DeepMind team proclaimed today.

Though much attention has been given to the efforts of computer engineers to create systems capable of beating grand masters at chess, besting Go players has been seen as possibly an even greater challenge and a major step in the development of machine learning and artificial intelligence.

A centuries-old game, Go pits two players against each other, taking turns to place stones on a grid-lined board. Players score by surrounding territory and capturing their opponent's stones.


Go presents a particularly difficult scenario for computers, as the number of legal board positions – roughly 2.08 x 10^170 – is so large as to be practically impossible to compute and analyze in a reasonable amount of time.
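For a sense of that scale, here's a quick back-of-the-envelope calculation in Python. The branching factors and game lengths are the rough figures usually quoted for the two games, not numbers taken from the paper:

```python
import math

# Each of the 361 points on a 19x19 board can be empty, black, or white.
# Roughly 1.2 per cent of those raw configurations turn out to be legal
# positions, which is where the ~2.08 x 10^170 figure comes from.
raw_configs = 3 ** 361
print(f"3^361 is about 10^{math.log10(raw_configs):.0f}")    # ~10^172

# Game-tree size grows as branching_factor ** game_length.
# Rough, commonly quoted figures (not from the Nature paper):
go_exp = 150 * math.log10(250)   # ~250 candidate moves over ~150 turns
chess_exp = 80 * math.log10(35)  # ~35 candidate moves over ~80 turns
print(f"Go game tree:    about 10^{go_exp:.0f}")     # ~10^360
print(f"Chess game tree: about 10^{chess_exp:.0f}")  # ~10^123
```

Even though the branching factor falls as stones fill the board, no amount of brute force will enumerate lines of play the way a chess engine can.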

While previous efforts have shown machines capable of breaking down a Go board and playing competitively, those programs could only compete with humans of moderate skill, well short of the top meat-based players.

To get around this, the DeepMind team said it combined a Monte Carlo tree search method with neural networks and machine-learning techniques to develop a system that analyzes the board and learns from top players to better predict and select moves.

The result, the researchers said, is a system that chooses its moves against a human player relying not just on computational muscle, but on patterns learned by its neural networks.

"During the match against [European Champion] Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network – an approach that is perhaps closer to how humans play," the researchers said.

"Furthermore, while Deep Blue relied on a handcrafted evaluation function, the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement methods."

Go pro Hajin Lee, secretary general of the International Go Federation, said: "AlphaGo's strength is truly impressive. I was surprised enough when I heard Fan Hui lost, but it feels more real to see the game records.

"My overall impression was that AlphaGo seemed stronger than Fan, but I couldn't tell by how much. I still doubt that it's strong enough to play the world's top professionals, but maybe it becomes stronger when it faces a stronger opponent."

AlphaGo's next matchup will be with the world's top human player, Lee Sedol. That meeting is set to take place in March. ®

Stop press: Cheeky Facebook has just updated a paper its AI geeks wrote in November on using neural networks to play Go.
