In the first of a series of games pitting an artificial intelligence program against a human world champion at the ancient game of Go, Google DeepMind's AlphaGo has narrowly taken Round 1 from Lee Sedol.

Sedol resigned with more than 28 minutes remaining on his clock, after a game that included 186 moves. The game was played in Sedol's native South Korea, the first in a five-game match that carries a prize of about $1 million.

A brief recap from Google:

"They were neck-and-neck for its entirety, in a game filled with complex fighting. Lee Sedol made very aggressive moves but AlphaGo did not back down from the fights. AlphaGo took almost all of its time compared to Lee Sedol who had almost 30 minutes left on the clock."

"I was very surprised," Lee said after the match, according to The Verge. "I didn't expect to lose. [But] I didn't think AlphaGo would play the game in such a perfect manner."

Sedol and AlphaGo will play four more times in the next week, taking breaks on Friday and Monday. You can watch their first game — and hear analysis of the strategies involved — on DeepMind's YouTube channel.

In the video, Sedol is seen rubbing his neck and scratching his head as the close match progresses. In the end, he carefully analyzed the board before deciding to resign. Sedol then moved several stones around on the board, playing out variations of how the game might have gone if he had chosen differently.

The win is the latest sign that a computer can make the complex choices needed to defeat an elite human competitor at Go, a game considered far harder for computers than chess because of its vastly larger number of possible positions. Last October, AlphaGo beat European champion Fan Hui, but Fan is not ranked at Sedol's level.

Here's how the International Go Federation described that matchup:

"In the first of the five AlphaGo-Fan games, both sides played conservatively and AlphaGo won by 2.5 points. In the rest of the match Fan played aggressively, but AlphaGo outfought him and won four times by resignation. Fan described AlphaGo as "very strong and stable ... like a wall."

As NPR's Geoff Brumfiel reported Tuesday:

"The Google program, known as Alpha Go, actually learned the game without much human help. It started by studying a database of about 100,000 human matches, and then continued by playing against itself millions of times.

"As it went, it reprogrammed itself and improved. This type of self-learning program is known as a neural network, and it's based on theories of how the human brain works.

"AlphaGo consists of two neural networks: The first tries to figure out the best move to play each turn, and the second evaluates who is winning the match overall."
