What we learned in Seoul with AlphaGo
March 16, 2016
Go isn’t just a game—it’s a living, breathing culture of players, analysts, fans, and legends.
Over the last 10 days in Seoul, South Korea, we’ve been lucky enough to witness some of
that incredible excitement firsthand. We've also had the chance to see something that's never
happened before: DeepMind's AlphaGo took on and defeated legendary Go player
Lee Sedol (9-dan professional with 18 world titles), marking a major milestone for artificial
intelligence.
Pedestrians checking in on the AlphaGo vs. Lee Sedol Go match on the streets of Seoul (March 13)
Go may be one of the oldest games in existence, but the attention to our five-game tournament
exceeded even our wildest imaginations. Searches for Go rules and Go boards spiked in the U.S.
In China, tens of millions watched live streams of the matches, and the
“Man vs. Machine Go Showdown”
hashtag saw 200 million pageviews on Sina Weibo. Sales of Go boards even surged in Korea.
Our public test of AlphaGo, however, was about more than winning at Go. We founded DeepMind
in 2010 to create general-purpose artificial intelligence (AI) that can learn on its own—and, eventually,
be used as a tool to help society solve some of its biggest and most pressing problems, from
climate change to disease diagnosis.
Like many researchers before us, we've been developing and testing our algorithms through games.
We first revealed AlphaGo in January—the first AI program that could beat a professional player at
the most complex board game mankind has devised, using deep learning and reinforcement learning.
The ultimate challenge was for AlphaGo to take on the best Go player of the past decade—Lee Sedol.
To everyone's surprise, including ours, AlphaGo won four of the five games. Commentators noted
that AlphaGo played many unprecedented, creative, and even “beautiful” moves. Based on our
data, AlphaGo’s bold move 37 in Game 2 had a 1 in 10,000 chance of being played by a human.
Lee countered with innovative moves of his own, such as his move 78 against AlphaGo
in Game 4—again, a 1 in 10,000 chance of being played—which ultimately resulted in a win.
The final score was 4-1. We're contributing the $1 million in prize money to organizations that
support science, technology, engineering and math (STEM) education and Go, as well as UNICEF.
We’ve learned two important things from this experience. First, this test bodes well for AI’s potential
in solving other problems. AlphaGo has the ability to look “globally” across a board—and find solutions
that humans either have been trained not to play or would not consider. This has huge potential for
using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas.
Second, while the match has been widely billed as "man vs. machine," AlphaGo is really a human
achievement. Lee Sedol and the AlphaGo team both pushed each other toward new ideas,
opportunities and solutions—and in the long run that's something we all stand to benefit from.
But as they say about Go in Korean: “Don’t be arrogant when you win or you’ll lose your luck.”
This is just one small, albeit significant, step along the way to making machines smart. We’ve
demonstrated that our cutting-edge deep reinforcement learning techniques can be used to
make strong Go and Atari players. Deep neural networks are already used at Google for specific
tasks, but we're still a long way from a machine that can learn to flexibly perform the full range
of intellectual tasks a human can—the hallmark of true artificial general intelligence.
Demis and Lee Sedol hold up the signed Go board from the Google DeepMind Challenge Match
With this tournament, we wanted to test the limits of AlphaGo. The genius of Lee Sedol did
that brilliantly—and we’ll spend the next few weeks studying the games he and AlphaGo played
in detail. And because the machine learning methods we’ve used in AlphaGo are general purpose,
we hope to apply some of these techniques to other challenges in the future. Game on!