Computer Beats Human at the World’s Most Complex Board Game

January 28, 2016 | Elizabeth Knowles

Go game board
Photo credit: Chad Miller/Flickr (CC BY-SA 2.0)

The Chinese game Go is said to be even more complicated than chess

In 1997, when Deep Blue, an IBM computer, beat a reigning world chess champion in a match for the first time, it was a huge breakthrough for the artificial intelligence community.

Games are a great way to challenge machine minds because they require not only thinking ahead but also a certain level of creativity. Deep Blue could evaluate 200 million positions per second, giving it an edge over any human, yet it still lost one game and drew three more.

More recently, the challenge has been to create a machine that can win at Go — a traditional Chinese board game. The rules of Go are fairly simple — use your stones to surround your opponent's — but the strategy is incredibly complex. According to the researchers, the number of possible board positions is larger than the number of atoms in the universe — although, to be fair, the number of possible chess games is also astronomically large.


AlphaGo, a machine developed by Google's DeepMind division in the UK, has met the challenge: it won 499 out of 500 games against other Go programs and swept all five games of a match against three-time European Go champion Fan Hui.

According to Fan, AlphaGo plays with an uncanny steadiness: "It's very strong and stable, it seems like a wall. For me this is a big difference," he said, in contrast to human players, who get tired and sometimes make big mistakes under pressure. "I know AlphaGo is a computer, but if no one told me, maybe I would think the player was a little strange, but a very strong player, a real person."

DeepMind researchers took an interesting approach to programming AlphaGo. Rather than simply enumerating all the possible moves at any point in the game and following how they branch off into future moves, they combined this tree search with neural networks trained on 30 million moves from games played by human experts. As a result, the machine became very good at predicting the moves that expert players would make.

“Because the methods we used are general-purpose, our hope is that one day they can be used to help address some of society’s toughest and most pressing problems,” said Demis Hassabis, vice-president of engineering at Google DeepMind, the British-based research centre that led the work.

AlphaGo’s next challenge? A match against world champion Lee Sedol in March.

Watch the video below to hear more of Fan's fascinating commentary about his frustrating and eye-opening experience.

