A Google computer program again defeated its human opponent in a final match of the ancient Chinese board game "Go," sealing its 4-1 victory Tuesday.
The week-long showdown between South Korean Go grandmaster Lee Sedol and AlphaGo, Google DeepMind's artificial intelligence program, confirmed that the software has mastered a game long considered a major challenge for artificial intelligence.
The series was one of the most intensely watched events in the past week across Asia. The human-versus-machine battle hogged headlines, eclipsing reports of North Korean threats of a pre-emptive strike on the South.
The final match was too close to call until the very end. Experts said it was the best of the five games in that Lee performed at his best and AlphaGo made few mistakes. Lee resigned about five hours into the game, unable to offset the extra compensation points that AlphaGo received for playing white.
The final match was broadcast live on three major TV networks in South Korea and on big TV screens in downtown Seoul.
Google estimated that 60 million people in China, where Go is a popular pastime, watched the first match on Wednesday.
Before AlphaGo's victory, the game was seen as too complex for computers to master. Go fans across Asia were astonished when Lee, one of the world's best Go players with 18 international championships, lost the first three matches.
After his third loss, Lee said he could not find any weaknesses in the 2-year-old computer system's playing. Some in South Korea raised questions about the fairness of the match, while others in the Go community regretted having underestimated AlphaGo's ability.
Lee's win over AlphaGo in the fourth match, on Sunday, showed the machine was not infallible, despite its lack of vulnerability to emotions or fatigue.
Go players take turns placing black or white stones on 361 grid intersections on a square board. Stones can be captured when they are surrounded by those of their opponent.
To take control of territory, players surround vacant areas with their stones. The game goes on until both sides agree there are no more places to put stones, or until one side decides to resign in an apparent loss.
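The capture rule described above can be sketched in code. This is a minimal illustration, not code from AlphaGo or any Go library; the board representation and function names are assumptions chosen for clarity. A group of connected same-colored stones is captured when it has no "liberties," i.e. no adjacent empty points.

```python
# Minimal sketch of Go's capture rule. Assumes a list-of-lists board
# where 'B', 'W', and '.' mark black, white, and empty points.
# All names here are illustrative, not from any real Go library.

def group_and_liberties(board, row, col):
    """Flood-fill the group of stones containing (row, col) and collect
    its liberties (adjacent empty points). Returns (group, liberties)."""
    color = board[row][col]
    size = len(board)
    group, liberties = set(), set()
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        # Examine the four orthogonal neighbors.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == '.':
                    liberties.add((nr, nc))
                elif board[nr][nc] == color:
                    stack.append((nr, nc))
    return group, liberties

def is_captured(board, row, col):
    """A group is captured when it has no liberties left."""
    _, liberties = group_and_liberties(board, row, col)
    return len(liberties) == 0
```

For example, a single black stone surrounded on all four sides by white stones has no liberties and `is_captured` returns `True`; leave any adjacent point empty and it returns `False`.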
Lee, 33, said he found AlphaGo weak at handling surprise moves. The program also played less well with a black stone, which moves first and must claim a larger territory than its opponent to win.
Lee chose not to exploit that weakness in the final match, offering instead to take the black stone himself; winning under those conditions would have made his victory over AlphaGo more decisive.
Google officials say the company wants to apply technologies used in AlphaGo in other areas, such as smartphone assistants, and ultimately to help solve real-world problems.
AlphaGo stands apart from traditional artificial intelligence programs that rely on brute-force calculations to predict all possible outcomes. Such an approach is not feasible in Go, which involves a near-infinite number of board positions. Google's DeepMind team input data from 100,000 games played by human experts and then had the program play against itself many times to "learn" from its mistakes.
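A rough calculation illustrates why brute-force enumeration is hopeless in Go. The branching factors and game lengths below are commonly cited approximate averages (about 35 moves over roughly 80 plies for chess, about 250 moves over roughly 150 plies for Go), not exact figures.

```python
# Back-of-the-envelope comparison of game-tree sizes, showing why
# brute-force search is infeasible in Go. Branching factors and game
# lengths are commonly cited rough averages, not exact values.
import math

chess_branching, chess_moves = 35, 80   # ~35 legal moves, ~80 plies
go_branching, go_moves = 250, 150       # ~250 legal moves, ~150 plies

# Work with base-10 exponents to avoid enormous integers:
# tree size ~ branching ** moves, so log10(size) = moves * log10(branching).
chess_exp = chess_moves * math.log10(chess_branching)
go_exp = go_moves * math.log10(go_branching)

print(f"Chess game tree: ~10^{chess_exp:.0f} positions")
print(f"Go game tree:    ~10^{go_exp:.0f} positions")
```

The Go exponent comes out hundreds of orders of magnitude larger than the chess exponent, which is why DeepMind turned to learning from human games and self-play rather than exhaustive search.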