Given all the press surrounding the U.S. presidential campaign, you may have missed something truly remarkable.
AlphaGo, an artificial intelligence developed by Google’s DeepMind, won four out of five games of Go against one of the best players on the planet — Lee Se-dol, from South Korea.
Decades ago, computers mastered checkers. In 1997, Deep Blue defeated Garry Kasparov in chess. And in 2011, Watson conquered Jeopardy! But Go — a notoriously difficult game from ancient China — was considered our last line of defense.
Is this the robot apocalypse foretold in countless science-fiction stories? Yes and no. It is certainly the end of an era. AlphaGo can outmaneuver a human player in what is considered one of our most sophisticated and cerebral games. So it might be “game over” for defining AI performance on the basis of board games.
But that is not the whole story. The really interesting stuff lies deep inside the deep learning algorithms of DeepMind’s AI. AlphaGo does not rely on preprogrammed instructions to make its moves. In fact, the engineers who built it have no idea what it will do. Its decisions are emergent phenomena — products of the machine itself.
For now, all we need to worry about is a game. But five years from now, deep learning will be all over the place, making real decisions about every aspect of our lives. So the time to start thinking about the consequences of this technology is now … while it is still just a game.
I’m David Gunkel, and that’s my perspective.