Google's AlphaGo AI defeats human in first game of Go contest
Google’s computer program AlphaGo defeated its human opponent, South Korean Go champion Lee Sedol, on Wednesday in the first game of a historic five-game match between human and computer.
Lee Sedol started with a bow, a traditional Korean gesture of respect for an opponent who could neither see him nor sense his presence.
The world champion at Go, an ancient Chinese board game, looked nervous. His eyes darted from side to side. He took a sip of water, and made his first move.
Lee could be forgiven some nerves: his opponent was AlphaGo, an artificial-intelligence programme designed by Google DeepMind, their five-game series billed as a landmark face-off between human and computer. “History is really being made here,” said commentator Chris Garlock, as the first game in the series started.
Three and a half hours later, history had indeed been made: AlphaGo won, shocking many observers of the game and marking a major breakthrough for AI.
Go isn’t played much in the west, but it is widely enjoyed throughout east Asia. Two players take turns to place stones on a board, trying to gain territory by arranging them in strategic shapes or patterns. The surface-level simplicity is deceptive: there are trillions of possible moves. The almost endless possibilities make it difficult to follow a particular strategy, and mastering the game means using intuition to react to any number of possible twists or turns.
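A rough calculation makes that scale concrete. The short Python sketch below is purely illustrative, and is not from the article or from DeepMind: it upper-bounds the number of configurations of a standard 19x19 board, where each intersection is empty, black or white, and compares the result with the commonly cited estimate of roughly 10^80 atoms in the observable universe.

```python
# Illustrative back-of-the-envelope calculation, not DeepMind code.
# A 19x19 Go board has 361 intersections; each can be empty, black or white,
# giving 3**361 configurations as a loose upper bound (many are illegal).
import math

INTERSECTIONS = 19 * 19           # points on a standard Go board
upper_bound = 3 ** INTERSECTIONS  # empty / black / white at each point

print(f"Upper bound on board configurations: ~10^{math.log10(upper_bound):.0f}")
print("Estimated atoms in the observable universe: ~10^80")
```

Even this crude bound, around 10^172, dwarfs the atom count, which is why the kind of exhaustive search that worked for chess programs is out of the question for Go.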
Computers had already conquered chess: in 1997, IBM’s Deep Blue defeated world champion Garry Kasparov. Go was “the only game left above chess”, as DeepMind’s CEO Demis Hassabis put it before Wednesday’s showdown.
Sedol, a South Korean who sports a bowlish haircut and looks younger than his 33 years, spent much of the match leaning forward, cradling his chin in his hand. Sat opposite him was DeepMind developer Aja Huang, who physically placed the stones on the board in positions chosen by AlphaGo. Lee played aggressively from the outset, putting AlphaGo on the defensive.
There are more possible Go positions than there are atoms in the universe
The match was close, with both AlphaGo and Lee making mistakes, and the result was unpredictable until near the end. Eventually, though, Lee conceded that AlphaGo had built an insurmountable lead. AI had scored a victory in one of the most creative and complex games ever devised.
Lee maintained a meek posture in a post-game press conference, hanging his head and at times looking to be on the verge of tears. He expressed surprise at his opponent’s strong performance. “I didn’t know AlphaGo would play such a perfect game,” he said. “I was very surprised because I did not think that I would lose the game. A mistake I made at the very beginning lasted until the very last.” Lee, who has won 18 world championships since becoming a professional Go player at the age of 12, said AlphaGo’s strategy was “excellent” from the beginning.
Yoo Chang-hyuk, another South Korean Go master who commentated on the game, described the result as a big shock and said that Lee appeared shaken at one point. “We are very excited about this historic moment. We are very pleased about how AlphaGo performed,” said Hassabis.
The result shocked many Go aficionados. As recently as two weeks ago Lee said he was confident of a sweeping victory, though he sounded less optimistic a day before the match. AlphaGo’s victory over the European Go champion in October last year – an achievement many thought was at least a decade away – should have acted as a warning. That earlier match was played behind closed doors and later reported in the journal Nature, and AlphaGo’s performance has steadily improved since.
AlphaGo’s mastery of Go is so significant because of the near-infinite number of board positions available and the intuition that top human players rely upon to pick between them. Hassabis described Go as “the most elegant game that humans have ever invented”, with “simple rules [that] give rise to endless complexity”. “There are more possible Go positions than there are atoms in the universe,” he added.
DeepMind’s team built “reinforcement learning” into the programme, meaning the machine played against itself and adjusted its own neural networks based on trial and error. AlphaGo is capable of narrowing down the search space for the next best move from the near-infinite to something more manageable. It can also anticipate the long-term results of each move and predict the winner. Its creators say the win over a human champion shows computers can mimic intuition and tackle more complex tasks.
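To give a concrete sense of what “playing against itself and adjusting based on trial and error” means in miniature, the sketch below trains a simple tabular policy on a small Nim-style counting game purely through self-play, strengthening moves that led to wins and weakening moves that led to losses. It is a toy written for illustration only and bears no relation to DeepMind’s actual system, which combines deep neural networks with tree search; the game, the variable names and the update rule here are all assumptions chosen for brevity.

```python
# Toy self-play reinforcement learning sketch (illustrative only, not DeepMind's code).
# One policy plays a Nim-style game against itself; after each game the moves made
# by the winning side are strengthened and those made by the losing side weakened.
import random
from collections import defaultdict

PILE = 12          # starting pile size
MOVES = (1, 2, 3)  # a player removes 1, 2 or 3 stones; whoever takes the last stone wins

# Tabular "policy": a preference weight for each move at each pile size.
policy = defaultdict(lambda: {m: 1.0 for m in MOVES})

def choose(pile):
    """Sample a legal move in proportion to its current preference weight."""
    legal = [m for m in MOVES if m <= pile]
    weights = [policy[pile][m] for m in legal]
    return random.choices(legal, weights=weights)[0]

def self_play_game():
    """Play one game of the policy against itself; return each side's moves and the winner."""
    pile, player = PILE, 0
    history = {0: [], 1: []}
    while True:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            return history, player  # this player took the last stone and wins
        player = 1 - player

def train(games=20000, step=0.1):
    """Trial and error: nudge move preferences up after wins, down after losses."""
    for _ in range(games):
        history, winner = self_play_game()
        for player, moves in history.items():
            sign = 1.0 if player == winner else -1.0
            for pile, move in moves:
                policy[pile][move] = max(0.01, policy[pile][move] + sign * step)

train()

# After training, the table should prefer leaving the opponent a multiple of four,
# the known winning strategy for this game.
for pile in range(1, PILE + 1):
    legal = {m: w for m, w in policy[pile].items() if m <= pile}
    print(pile, max(legal, key=legal.get))
```

The point of the sketch is that the policy is never told what good play looks like; repeated self-play and simple reinforcement alone are enough for it to discover the game’s optimal strategy, which is the general principle the article describes, scaled down drastically.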
The match took place in a quiet room as reporters watched on a projector screen from a separate press area. The game, including commentary, was live-streamed on YouTube, and hundreds of thousands of people watched live on TV and online.
AlphaGo doesn’t get tired, doesn’t forget, doesn’t worry
Ben Lockhart, a 22-year-old who took up Go as a child in New York and who moved to South Korea four years ago to study the game full time, welcomed the unprecedented level of attention brought by the AlphaGo contest. “That no one really knows this game [in the west] has been frustrating for a long time. At least after this more people will have heard of Go,” he said.
Maybe part of the reason that the game has struggled to make inroads in the west, where it has no history, is that it is a less than scintillating spectator activity. The action inches along, with long pauses as players eye the board and contemplate their next move. It isn’t always obvious to viewers unfamiliar with the game who is winning. After a couple of hours of play, most reporters were slouched in their chairs playing with their smartphones.
Lee has four more chances to beat AlphaGo and claim the $1m in prize money. The stress and fatigue affecting the South Korean won’t be a problem for his opponent when they reconvene for game two on Thursday 10 March; the remaining games conclude the following Tuesday.
“Historically, Go has been a game of people testing themselves against each other, and themselves. It has been a game of character,” said Andrew Okun, president of the American Go Association, who flew to Seoul to watch the showdown. AlphaGo, he added, “doesn’t get tired, doesn’t forget, doesn’t worry”.