Artificial intelligence developed by the likes of Google’s DeepMind and Elon Musk’s OpenAI is taught within the confines of game worlds – including navigating around mazes, dodging deadly cliffs, playing laser tag and flying through space.
In a mission to build a general AI capable of solving any problem put in front of it, DeepMind is open-sourcing its game code to everyone. The software and 14 levels from DeepMind Lab will be put on GitHub later this week.
And, not to be outdone, Elon Musk’s OpenAI is also releasing its own ‘computer training ground’, called Universe. Universe is open-source software that supports Gym, OpenAI’s toolkit for testing algorithms that learn to play games through a reward scheme.
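The reward scheme underpinning toolkits like Gym follows a simple loop: the agent observes the environment, picks an action, and receives a reward signal it tries to maximise. The sketch below illustrates that loop with a toy environment of my own (`GridEnv` is an invented stand-in, not part of Gym); it only mimics the conventional `reset()`/`step()` interface.

```python
import random

class GridEnv:
    """A toy stand-in for a Gym-style environment: the agent walks
    along a one-dimensional corridor and is rewarded for reaching the end."""

    def __init__(self, length=5):
        self.length = length
        self.position = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.position = 0
        return self.position

    def step(self, action):
        """Apply an action (+1 right, -1 left) and return the
        conventional tuple (observation, reward, done, info)."""
        self.position = max(0, min(self.length, self.position + action))
        done = self.position == self.length
        reward = 1.0 if done else 0.0  # reward only on reaching the goal
        return self.position, reward, done, {}

env = GridEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.choice([-1, 1])  # a random policy, for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward

print(total_reward)  # → 1.0 once the agent stumbles onto the goal
```

A learning algorithm would replace the random choice with a policy that it updates from the rewards it collects; the environment’s interface stays the same.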
“The [AI] agents are wandering around in this world and there are all sorts of different objects they can see: walls, doorways, pieces of fruit and other objects they can pick up,” Shane Legg, one of DeepMind’s co-founders, told WIRED. “It’s up to them to learn how to behave in this environment and solve the problems we can give to them.”
Videos released alongside the source code show AI agents (represented as large orbs) collecting pieces of fruit, walking through corridors, climbing stairs, exploring space and navigating a complex digital maze. Open-sourcing the software will let AI researchers build their own levels, test their own AI on the platform and see what the Google-owned firm has used it for. (“You could make huge mazes with keys and doors and all sorts of challenges.”)
“It’s much closer to the real world,” Legg continued. “We live in a 3D environment. If we want artificial intelligence systems that can perceive the real world, move around in that environment, interact with it and solve problems, this is a virtual proxy to that.”
One level released by DeepMind is a game of AI laser tag, in which two teams of agents compete against bots. “They run around and tag each other and score points,” Legg said. Bots are included in the training games so the AI can learn to “co-operate” with other players to solve problems; they add “richness and complexity” to the environment.
For gamers, the first-person views may look familiar: the game world is built on the engine from Quake III Arena, released 17 years ago. The world, which DeepMind has developed further, “is rendered with rich science fiction-style visuals”, says a paper published alongside the source code.
The aged engine was picked deliberately. “There are advantages to using relatively old software,” Legg explained. “They were designed to run on relatively low-powered computers and get quite good performance.” The old engine is resource-efficient and doesn’t need to run on a GPU.
Within DeepMind, the Lab has been in use for a “couple of years” and is still being developed; it has been used to advance the firm’s own machine learning. The game world has helped teach AI short-term memory (like navigating the London Underground), episodic memory (similar to how dogs and humans remember), image recognition and navigation skills.
Legg concluded: “Navigation brings together a number of different challenges: you may want to remember the structure of a maze and then use what you have remembered and learnt about the structure to plan your way through the maze and solve a problem.”
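The planning step Legg describes can be illustrated with a classic shortest-path search: once an agent has a remembered map of the maze, breadth-first search finds a route through it. The maze layout and helper below are invented for illustration, not taken from DeepMind Lab.

```python
from collections import deque

# A small maze remembered as a grid: '#' walls, '.' open, S start, G goal.
MAZE = [
    "S.#.",
    ".#..",
    "...#",
    "#..G",
]

def plan_path(maze):
    """Breadth-first search over the remembered maze, returning the
    length of the shortest start-to-goal path (or None if unreachable)."""
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows)
                 for c in range(cols) if maze[r][c] == "S")
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if maze[r][c] == "G":
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

print(plan_path(MAZE))  # → 6
```

A learned agent discovers the maze layout by exploring rather than being handed it, but the underlying challenge is the same: remember the structure, then use it to plan.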
Back in April, Elon Musk’s artificial intelligence firm similarly created a ‘gym’ to let developers train their AI systems on games and challenges.
The open source code provides “environments” in which developers can test their AIs. These environments include 59 Atari games, with Alien, Pong, Asteroids and Pac-Man all making an appearance.
OpenAI also compiles a leaderboard of the most successful systems. Unlike traditional leaderboards, however, these won’t be a list of high scores – instead, success will be based on how versatile the systems are.