He still lost the tournament.
Back in January, we reported that a computer had beaten three-time European Go champion Fan Hui at a game of Go. The story continued with a five-game match between AlphaGo — the Go-playing AI — and world champion Lee Sedol. Although Sedol was confident he would win, he lost the first three games. It was starting to look like AlphaGo was unbeatable, but Sedol rallied to win the fourth game before losing the fifth and final one.
What is Go? It is a traditional Chinese board game in which two players place black and white stones on the intersections of a grid, aiming to surround more territory than their opponent and capture enemy stones along the way. The rules may sound simple, but the strategy is incredibly intricate; according to the American Go Association, Go is the most complex two-player game out there.
“Go is incomparably more subtle and intellectual [than Chess],” said Sedol, according to Wired. This is because at any point in the game, there are many more possible moves. Consider just the first move: in chess there are 20 possible opening moves (ten pieces — eight pawns and two knights — can move, each to one of two squares), whereas in Go there are 361 intersections where a player can place their first stone.
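To get a feel for the difference in scale, here is a rough back-of-the-envelope calculation in Python. It uses the commonly cited Shannon-style estimate — average branching factor raised to average game length — and the figures plugged in are approximations, not exact values:

```python
# Shannon-style game-tree estimate: branching factor ** game length.
# The figures below are commonly cited averages, not exact values.
chess_tree = 35 ** 80     # ~35 legal moves per turn, ~80 plies per game
go_tree = 250 ** 150      # ~250 legal moves per turn, ~150 plies per game

# Express each as a power of ten by counting its digits.
print(f"chess: about 10^{len(str(chess_tree)) - 1} positions")  # → 10^123
print(f"go:    about 10^{len(str(go_tree)) - 1} positions")     # → 10^359
```

Even with these crude estimates, Go's game tree comes out hundreds of orders of magnitude larger than chess's — far too large to search exhaustively.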
Although Go originated in Asia, it has been played by many well-known Western mathematicians, physicists, and computer scientists, including Albert Einstein, John Nash, and Alan Turing.
You might think that, with everything computers can do nowadays, computer scientists would have created a program to beat champion Go players long ago. However, as the NatandLo video below mentions, the world is messy. It’s easy for a computer to simulate concepts like how galaxies move — something that is hard for humans — but recognising that a tree is a tree, which is easy for us, is much more challenging for a machine.
One of the techniques currently used in machine learning is artificial neural networks. Instead of writing programs that solve a problem, computer scientists write programs that learn to solve a problem. Watch the video below to find out more about how machine learning and artificial neural networks work.
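As a toy illustration of "a program that learns to solve a problem", the sketch below trains a single artificial neuron — the basic building block of a neural network — to reproduce the logical AND function. Everything here is illustrative and vastly simpler than the networks AlphaGo uses:

```python
import random

# Start with random weights; the neuron knows nothing about AND yet.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

def predict(x):
    """Fire (output 1.0) if the weighted sum of inputs exceeds zero."""
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1.0 if s > 0 else 0.0

# Training data: the truth table for logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# Perceptron learning rule: nudge the weights toward the right answer
# each time the neuron gets an example wrong.
for _ in range(100):
    for x, target in data:
        error = target - predict(x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in data])  # → [0.0, 0.0, 0.0, 1.0]
```

The program was never told the rule for AND; it found weights that implement it purely from examples and corrections — the same learn-from-feedback idea, scaled up enormously, that underlies AlphaGo's training.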
AlphaGo combines artificial neural networks with other techniques, including tree search.
According to the CBC, “[The] DeepMind team built ‘reinforcement learning’ into AlphaGo, meaning the machine plays against itself and adjusts its own neural networks based on trial-and-error. AlphaGo can also narrow down the search space for the next best move from the near-infinite to something more manageable. It also can anticipate long-term results of each move and predict the winner.”
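The "narrowing down the search space" idea can be sketched in a few lines: a policy assigns a score to every possible move, and the search only explores the few highest-scoring candidates. In AlphaGo a neural network produces those scores; the random numbers below are just placeholders for illustration:

```python
import random

def top_moves(scores, k=3):
    """Keep only the indices of the k highest-scoring moves."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

# Placeholder scores for all 361 points of a 19x19 board; in AlphaGo a
# neural network would produce these, here they are random.
random.seed(1)
scores = [random.random() for _ in range(361)]

candidates = top_moves(scores, k=3)
print(len(candidates))  # → 3: the search considers 3 moves instead of 361
```

Pruning the tree this way at every level is what turns a "near-infinite" search into a manageable one.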
"While the baroque rules of Chess could only have been created by humans, the rules of Go are so elegant, organic, and rigorously logical that if intelligent life forms exist elsewhere in the universe, they almost certainly play Go," the American Go Association quotes Edward Lasker, a chess grandmaster. I wonder whether they could beat AlphaGo!
You might also like: Robots Could Learn New Skills by Watching YouTube