Artificial intelligence machine out-plays gamers in video game
When it comes to gaming, Carnegie Mellon students are usually the ones beating the computer. But this time, the computer beat the students.
Carnegie Mellon University computer science students Devendra Chaplot and Guillaume Lample recently built an artificial intelligence (AI) agent for the video game Doom that outplays both the game’s built-in agents and human gamers. They accomplished this by applying deep-learning techniques that taught their agent, Arnold, to navigate the game’s 3-D environment.
“The work is purely a result of our passion for artificial intelligence and video games,” Chaplot said. “Games have been a testbed for advancement of AI since decades, like Chess, Poker, 2-D Atari Games, Go, 3D FPS Games, and this development can be viewed as a very small step towards creating a general Artificial Intelligence.”
Chaplot and Lample’s research is a major feat because players must choose their moves while seeing only part of the game world on screen. The work follows earlier breakthroughs from Google’s DeepMind, which used deep-learning techniques to master 2-D Atari 2600 video games and later to defeat a world champion at the board game Go. A key difference between the projects is that Atari games show players the whole playing field, while Doom’s first-person view reveals only a limited slice of it.
“The fact that their bot could actually compete with average human beings is impressive,” associate professor of machine learning Ruslan Salakhutdinov said in a university press release.
Chaplot and Lample’s work has also received a lot of recognition since they posted their research online, submitted it to an AI conference, and uploaded videos of their agent’s gameplay to YouTube. In three days, the videos drew over 100,000 views. Their agent also took second place in the Visual Doom AI competition, in which AI agents fight deathmatches against one another.
The pair’s feat is important to computer science because of the many difficulties they had to overcome for their agent to beat both humans and the game-generated agents. Humans are naturally better at tracking and avoiding enemies in a 3-D world, which Chaplot believes gives them an advantage over a computer. The game’s built-in agents, meanwhile, have access to maps and other internal game information that effectively lets them cheat.
To make up for these discrepancies, Chaplot and Lample had their agent choose its moves based only on what appears on the screen, just like a human gamer. They achieved this using deep-learning techniques built on neural networks. While the agent navigates the game, the system uses a Deep Q-Network; when it detects an enemy, it switches to a Deep Recurrent Q-Network, whose long short-term memory tracks the enemy’s movements and helps determine where the agent should aim.
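The learning rule at the heart of a Deep Q-Network can be sketched in a few lines. The snippet below is a minimal, tabular illustration of the Q-learning update; the states and actions are invented stand-ins for this example, not the actual Doom observations or controls. The real system instead trains a convolutional neural network on raw screen pixels, and the recurrent variant adds an LSTM layer so the network can remember where an enemy has recently been.

```python
# Minimal tabular sketch of the Q-learning update that Deep Q-Networks
# apply in neural form. State and action names here are illustrative
# stand-ins, not the game's real inputs.

ALPHA = 0.5   # learning rate
GAMMA = 0.9   # discount factor for future reward

ACTIONS = ["move_forward", "turn_left", "turn_right", "shoot"]

# Q-table: (state, action) -> estimated future reward, initially zero.
q = {(s, a): 0.0 for s in ["hallway", "enemy_visible"] for a in ACTIONS}

def q_update(state, action, reward, next_state):
    """One Q-learning step: nudge Q(s, a) toward reward + GAMMA * max_a' Q(s', a')."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q[(state, action)] += ALPHA * (target - q[(state, action)])

# Example: shooting while an enemy is visible earns a reward, so the
# agent's estimate for that state-action pair rises.
q_update("enemy_visible", "shoot", reward=1.0, next_state="hallway")
```

A deep Q-network replaces the lookup table with a neural network that maps a screen image to one Q-value per action, which is what lets the agent generalize to screens it has never seen before.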
Although Chaplot and Lample have already put an immense amount of work into their current model, they still have a lot they want to do with AI technology.
“We would like to extend this work to improve our AI agent to tackle more complex games,” Chaplot said. “In the future, the deep reinforcement learning techniques we used to teach Arnold to play a virtual game might help self-driving cars operate safely on real-world streets and train robots to do a wide variety of tasks to help people.”