San Francisco: Google DeepMind’s artificial intelligence (AI) system beat a top-ranked player of the board game Go in a televised match in South Korea, providing the first evidence that the company’s software has attained super-human status at a challenging 2,500-year-old strategy contest.
The Internet company is playing a five-match tournament against Lee Sedol, who Google said has been the top-ranked Go player of the past decade, to show off the capabilities of software developed by its London-based AI subsidiary DeepMind. Lee’s loss was announced by match organizers in Seoul.
“It’ll never get tired and it’ll never get intimidated,” said DeepMind co-founder Demis Hassabis at a press conference on Tuesday ahead of the match. “These are the main advantages.”
DeepMind, part of Mountain View, California-based Alphabet Inc., revealed its software, called AlphaGo, in January in a paper published in science journal Nature. AlphaGo had attained expert human-level performance at Go, and had beaten European professional Go player Fan Hui in a match held in the company’s London offices in October. When the paper was published, experts said they had thought a Go-playing AI system was five to 10 years away. The first win against Lee is further confirmation of the power of DeepMind’s system.
What sets DeepMind’s approach apart from traditional Go-playing software is its use of a technology called a neural network, which lets computers learn from experience, rather than specific programming. This enables it to learn by studying example games, then by playing millions of games against itself, inferring the rules and, eventually, developing long-term strategies it can use to try to win. The system also uses a more traditional computing technique called Monte Carlo Tree Search.
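To give a flavor of the tree-search half of that combination, here is a minimal, illustrative Monte Carlo Tree Search sketch. It is not DeepMind's implementation: AlphaGo guides its search with neural networks, while this toy uses purely random playouts, and it is applied to a trivial game of Nim (players alternately take 1 or 2 stones; whoever takes the last stone wins) rather than Go. The node structure, UCB1 selection rule, and select/expand/simulate/backpropagate loop are the standard MCTS ingredients.

```python
import math
import random

class Node:
    """One state in the search tree, reached by playing `move` from the parent."""
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining after `move` was played
        self.parent = parent
        self.move = move          # the move (take 1 or 2) that led here
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins, counted for the player who just moved

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.stones and m not in tried]

def ucb1(child, parent_visits, c=1.4):
    # Selection rule: balance win rate (exploitation) against exploration.
    return child.wins / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def rollout(stones):
    """Play random moves to the end; True if the player to move from here wins."""
    turn = 0
    while True:
        stones -= random.choice([1, 2] if stones >= 2 else [1])
        if stones == 0:
            return turn == 0  # the player who took the last stone wins
        turn ^= 1

def mcts(stones, iterations=3000):
    root = Node(stones)
    for _ in range(iterations):
        # 1. Selection: descend through fully expanded nodes via UCB1.
        node = root
        while node.stones > 0 and not node.untried_moves():
            node = max(node.children, key=lambda ch: ucb1(ch, node.visits))
        # 2. Expansion: add one untried child, if the node is non-terminal.
        if node.stones > 0:
            move = random.choice(node.untried_moves())
            child = Node(node.stones - move, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node.
        if node.stones == 0:
            mover_won = True  # the player who just moved took the last stone
        else:
            mover_won = not rollout(node.stones)
        # 4. Backpropagation: flip the winner's perspective at each level.
        while node is not None:
            node.visits += 1
            node.wins += 1.0 if mover_won else 0.0
            mover_won = not mover_won
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

From a 4-stone pile, perfect play is to take 1 (leaving the opponent a losing 3-stone pile), and the search converges on that move; AlphaGo runs the same loop over Go positions, with learned networks replacing the random rollout and move selection.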
Go, also known as Baduk, is a game that sees players battle to take territory on a board by taking turns placing stones on the intersections of a grid. There is only one type of piece and players choose to play as either white or black. On a 19-by-19 Go grid, there are more possible board configurations than there are atoms in the known universe.
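The "more configurations than atoms" claim is easy to check with a rough bound: each of the 361 intersections is empty, black, or white, giving at most 3^361 arrangements. (The count of *legal* positions is lower, but of a similar astronomical order.) A quick sanity check, using the common rough estimate of 10^80 atoms in the observable universe:

```python
# Crude upper bound: 3 states (empty, black, white) per intersection,
# 19 * 19 = 361 intersections.
positions_upper_bound = 3 ** 361
atoms_estimate = 10 ** 80  # common rough estimate for the observable universe

print(len(str(positions_upper_bound)))          # 173 digits, i.e. about 1.7e172
print(positions_upper_bound > atoms_estimate)   # True
```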
“I’m somewhat shocked,” Lee told reporters after the match. “I didn’t really imagine I’d lose. I didn’t foresee AlphaGo would play Go so perfectly.”
The game is played widely in Asia, with tournaments awarding prizes in the hundreds of thousands of dollars. Top players like Lee are treated like celebrities—DeepMind first contacted him through his agent, rather than reaching out directly, Hassabis said in January.
“Whenever you have a large number of people using something, we can probably use machine intelligence to make it more efficient,” Alphabet chairman Eric Schmidt said in Seoul.
Google already uses AI across its products, for services like automatically writing e-mails, recommending YouTube videos and providing the brains of its in-development self-driving cars. The next wave of AI technologies will use techniques akin to those developed by DeepMind, but the company hasn’t yet disclosed any particular products that use DeepMind’s techniques.
“Health care is one of the main things we’re looking at next,” Hassabis said. “The system and techniques that we’re using for AlphaGo should be useful for anywhere, any kind of problem where there’s lots and lots of data and you’re trying to understand the structure in that data and make some kind of decision.”
Bloomberg