Google’s DeepMind and Blizzard are teaming up to perform cutting-edge research into artificial intelligence (AI) by teaching a computer how to play StarCraft.
Computer-controlled characters in video games are hardly anything new. Super Mario Bros. would probably have been an incredibly dull experience if there hadn’t been little computer-generated critters to stomp on, or a fire-breathing dragon-turtle to overcome at the very end. But there’s a stark difference between a computer-controlled character in a retro video game and a computer-controlled player inside an expansive and complex real-time strategy game. StarCraft II is that game, and Google’s AI division, DeepMind, believes that it might be the perfect environment for teaching their AI to think and learn as a human does.
The sci-fi video game might not be the first thing that comes to mind when most people think of strategy. In fact, it might not even be the first game that most people would think of. Chess, or the complex board game Go, might seem like more suitable tests of a computer’s ability to strategise and reason. But then again, DeepMind has already developed programmes which have conquered these arenas and, unlike such board games, StarCraft requires players to make choices without being able to see every move their opponents are making. StarCraft is a game which will teach DeepMind’s AI player, or “agent” as they designate it, to act tactically based on memory and to reason about how its opponents might be playing. The goal is very much to anticipate and react, well ahead of time, to moves it can’t see. Unfamiliar territory is actually the best test of a computer’s ability to react and adapt, and the hope is that this will provide the basis for other computer systems to anticipate potential problems and adapt to meet them beforehand.
DeepMind’s own website states that “[t]esting our agents in games that are not specifically designed for AI research and where humans play well, is crucial to benchmark agent performance.” So, in order to really test the limits of DeepMind’s agent, certain restrictions will apply. The agent will not have access to any of the game’s code and will only be able to see and experience what any human player would have on their screen. A dataset of past games will be provided to the agent as study material so that it can get a grasp of how people have played the game before, and it will be introduced to certain elements of gameplay through smaller mini-games that test specific skills and tasks before it tackles the full challenge.
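The restrictions described above amount to a classic agent-environment loop under partial observability: the agent receives only a screen-level observation, never the game’s internal state, and must rely on memory of what it has already seen. A minimal sketch of that idea follows, using a toy one-dimensional environment as a stand-in for the real game; all class and function names here (`ToyPartialObsEnv`, `ScriptedAgent`, `run_episode`) are hypothetical and do not come from DeepMind’s actual toolkit.

```python
import random


class ToyPartialObsEnv:
    """Stand-in environment: the agent sees only a local window,
    never the hidden goal location (mirroring the screen-only rule)."""

    def __init__(self, size=10, seed=0):
        rng = random.Random(seed)
        self.size = size
        self.goal = rng.randrange(1, size)  # hidden from the agent
        self.pos = 0

    def observe(self):
        # Only the agent's position and whether the goal is adjacent
        # are visible -- not the goal's true coordinate.
        return {"pos": self.pos, "goal_nearby": abs(self.pos - self.goal) <= 1}

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.pos = max(0, min(self.size - 1, self.pos + action))
        done = self.pos == self.goal
        return self.observe(), done


class ScriptedAgent:
    """Acts from observations plus a memory of visited positions,
    echoing the article's 'act tactically based on memory'."""

    def __init__(self):
        self.visited = set()

    def act(self, obs):
        self.visited.add(obs["pos"])
        # Prefer unexplored ground to the right, then to the left.
        if obs["pos"] + 1 not in self.visited:
            return +1
        if obs["pos"] - 1 >= 0 and obs["pos"] - 1 not in self.visited:
            return -1
        return +1


def run_episode(env, agent, max_steps=50):
    """Run one observe-act-step loop; return the step count on success."""
    obs = env.observe()
    for step in range(1, max_steps + 1):
        obs, done = env.step(agent.act(obs))
        if done:
            return step
    return None
```

The real SC2LE setting is vastly harder, of course, but the shape is the same: the environment exposes observations and accepts actions, and everything the agent knows beyond the current frame has to live in its own memory.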
It will be highly interesting to see how the agent behaves. StarCraft II does not have a strictly linear narrative: players are given the option of taking side missions to bolster their resources and their armies. The DeepMind Challenge Match has already shown that artificial intelligence may act in ways which seem bizarre or illogical to human players, but which have immense long-term strategic value. In that match, DeepMind’s programme, AlphaGo, managed to defeat the 18-time world champion of Go, Lee Sedol, in four out of five games, baffling the champion and commentators alike.
The trials should give invaluable data on the capabilities and limitations of how AI can learn and compete against human opponents in a brand-new arena. While this research is largely academic, it is somewhat comforting that the name of their military-strategy learning toolkit is SC2LE and not, say, Skynet.