A computer has bested humanity at one of the most complex strategy games ever devised.

Researchers at Google have developed a program that excels at the game of "go," which originated in China and is considered a tougher challenge for a machine than other strategy games such as chess. The program has defeated the European champion of the game. Now its developers say the same technology may be used to conquer problems in everything from medicine to climate modeling.

A paper describing the program appears today in the journal Nature.

The game of go began in China more than 2,500 years ago. Players take turns placing black or white stones, trying to fence off territory. The winner is the player who cordons off more of the board, with encircled and captured enemy stones adding to the score.

"It's a very beautiful game with extremely simple rules that lead to profound complexity," says Demis Hassabis, a researcher with Google Deep Mind, which developed the new program. The number of possible board positions is far greater than chess. "The strongest go programs until now have only been as good as amateur players," he says.

Enter AlphaGo, an artificial-intelligence (AI) computer program designed by Hassabis and his colleagues. AlphaGo is built on so-called deep neural networks, which are inspired by biological brains. The networks have millions of neuron-like connections whose strengths AlphaGo adjusts as it learns. In essence, the program reprograms itself to home in on the optimum strategy. Similar networks have proven remarkably effective in recent years at tasks such as recognizing objects in photos.
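For readers curious what that "learning" means in practice, the sketch below (ours, not Google's code) shows the principle in miniature: a tiny network's connection strengths are nudged, step by step, until its outputs match the training examples. Here it learns the simple XOR function.

```python
# A minimal illustration (not AlphaGo's actual code) of how a neural
# network "reprograms itself": connection strengths are repeatedly
# nudged to reduce error. This tiny two-layer network learns XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: signals flow through the connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust each connection in the direction that
    # shrinks the prediction error -- this is the "learning."
    delta2 = (out - y) * out * (1 - out)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta2)
    b2 -= 0.5 * delta2.sum(axis=0)
    W1 -= 0.5 * (X.T @ delta1)
    b1 -= 0.5 * delta1.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```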

AlphaGo actually comprises two neural networks. The first network searches for possible moves. The second network then evaluates each move under consideration to determine whether it will give AlphaGo the upper hand later in the game. Unlike chess-playing algorithms, the AlphaGo program doesn't crunch through every possible permutation of the game; instead, it develops a sense of which moves give it the upper hand. "This approach makes AlphaGo's search much more humanlike than previous approaches," says David Silver, another Google researcher.
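In schematic terms, the division of labor Silver describes might look something like the sketch below. The function names and interfaces here are hypothetical stand-ins, not AlphaGo's real components: a "policy" scores candidate moves so only a promising handful is examined, and a "value" estimate judges the positions those moves lead to.

```python
# Hedged sketch of the two-network division of labor described above;
# the names and interfaces are hypothetical, not AlphaGo's real API.
from typing import Callable, List

Position = object  # stand-in for a board state
Move = object      # stand-in for a move

def select_move(
    position: Position,
    legal_moves: List[Move],
    policy: Callable[[Position, Move], float],   # how promising a move looks
    value: Callable[[Position], float],          # estimated chance of winning
    play: Callable[[Position, Move], Position],  # apply a move to a position
    top_k: int = 5,
) -> Move:
    # Network 1 (policy): keep only a handful of promising moves,
    # rather than crunching through every legal option.
    candidates = sorted(legal_moves, key=lambda m: policy(position, m),
                        reverse=True)[:top_k]
    # Network 2 (value): pick the candidate whose resulting position
    # looks most likely to pay off later in the game.
    return max(candidates, key=lambda m: value(play(position, m)))
```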

At first, AlphaGo was terrible. But after reviewing 30 million moves from human players and playing millions more games against itself, it caught on.
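That two-stage recipe, first imitating human games and then improving through self-play, can be outlined roughly as below. Every helper name here (load_human_games, self_play_game, update) is a hypothetical placeholder, not part of any real system.

```python
# Rough outline of the two-stage training described above. All helper
# names are hypothetical placeholders, not AlphaGo's real code.
def train(network, load_human_games, self_play_game, update):
    # Stage 1: supervised learning -- imitate recorded human moves.
    for position, human_move in load_human_games():
        update(network, position, target_move=human_move)

    # Stage 2: reinforcement learning -- play millions of games
    # against itself and reinforce the moves that led to wins.
    for _ in range(1_000_000):
        history, won = self_play_game(network)
        for position, move in history:
            update(network, position, target_move=move,
                   reward=1.0 if won else -1.0)
```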

The researchers first tried it against other go-playing computer programs, and it whomped them. "In fact, AlphaGo even beat those programs after giving them four free moves of head start at the beginning of each game," Silver says.

In October, the program was put up against the European go champion, Fan Hui. Only a few spectators were in the room as the game unfolded. "It was very thrilling actually," Silver says. In the end, AlphaGo emerged victorious.

For now, the program is restricted to playing go, but Hassabis says Google is hopeful the same approach could quickly learn many other tasks: everything from making restaurant recommendations to analyzing medical images.

The program is great but probably has limits, says Gary Marcus, the CEO of Geometric Intelligence, which seeks to bring concepts from cognitive science to machine learning. "It remains to be seen what it can and can't learn," he says. "There is a very long history of extravagant claims for AI, and a long history of AI programs that are actually fairly narrow [in what they can do]."

For now, Hassabis says the AlphaGo team remains focused on the game at hand. In March, the program will play the legendary go player Lee Sedol in a five-game match in South Korea.

Copyright 2016 NPR.