Google Creates AI That Can Play Atari Games Better Than You

Constantly on the lookout for the next big leap in machine learning, Google has set one of the most advanced AI systems in the world to playing video games. The project comes from DeepMind Technologies, a London-based start-up that Google purchased last year. Researchers gave the AI 49 games on the Atari 2600, and it was told to play them without any additional instructions. The above video from the scientific journal Nature, which published an article detailing the project on February 26, shows some of the progress the AI made.

When the AI did well, it was rewarded, in much the same way a person rewards a dog during training. Google's system exceeded the skills of expert human players in 29 games and far surpassed previous AI records in 43 of them.

Speaking to Bloomberg, DeepMind co-founder and Google vice president of engineering Demis Hassabis said the project represents "the first time anyone has built a single learning system that learns directly from experience and manages a wide range of challenging tasks."

While it might seem a little odd to have one of the best artificial intelligence systems in the world grinding through older games, the experiment has some very practical applications. To learn anything, whether you're an animal or a computer, you need to be able to adapt after making mistakes and to predict the outcomes of your choices. Old computer games are useful for this because their rules are simple and failure is unambiguous. AI isn't anywhere near handling the more complex choices of three-dimensional games, shooters, or MOBAs, for example, but a lesson like "going left at time X always results in a loss" is easy for the system to pick up, as the sketch below illustrates.
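DeepMind's actual system, a "deep Q-network," learns in roughly this trial-and-error fashion, though it reads raw screen pixels with a neural network rather than consulting a lookup table. Purely as an illustration, here is a minimal tabular Q-learning sketch on a hypothetical toy game (the game, its states, and all parameter values are invented for this example) in which going left at the first step always loses:

```python
import random

ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

# Q-table: the learner's estimate of future reward for each (state, action).
Q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}

def step(state, action):
    """Hypothetical toy game: going left at state 0 is a guaranteed loss."""
    if state == 0 and action == "left":
        return None, -1.0   # terminal: lose
    if state == 2:
        return None, 1.0    # terminal: win
    return state + 1, 0.0   # otherwise, keep playing

for episode in range(500):
    state = 0
    while state is not None:
        # Mostly pick the best-known action, occasionally explore at random.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Reinforce: nudge the estimate toward reward plus discounted future value.
        future = 0.0 if next_state is None else max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * future - Q[(state, action)])
        state = next_state

print(Q[(0, "left")], Q[(0, "right")])  # "left" ends up clearly negative
```

After a few hundred episodes, the table assigns a sharply negative value to going left at the first step, which is exactly the kind of simple, unambiguous lesson described above.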

"It's mastering and understanding the structure of these games, but we wouldn't say yet it's building conceptual knowledge or abstract knowledge," Demis says. "The ultimate goal here is to build smart, general-purpose machines, but we're many decades off from doing that."