Playing First-Person Perspective Games with Deep Reinforcement Learning Using the State-of-the-Art Game-AI Research Platforms

Document Type

Book Chapter

Source of Publication

Studies in Computational Intelligence

Publication Date



Computer games have become one of the most interesting and dynamic research areas in artificial intelligence, as games are excellent testbeds for evaluating theoretical ideas before applying them in the real world. Growth in computing power, advances in machine learning (particularly deep reinforcement learning), and the evolution of neural networks now allow autonomous game agents to perform remarkably well, often surpassing human players while using only raw screen pixels to make decisions. In this chapter, we use deep reinforcement learning in the form of deep Q-learning, under its two variants Deep Q-Network (DQN) and Deep Recurrent Q-Network (DRQN), to control agents playing two well-known computer games, Doom and Minecraft. We show how to build a testbed for such state-of-the-art methods using the ViZDoom, Gym-Minecraft, and Microsoft's Malmo platforms. First, we present results on simplified game scenarios from Doom, predicting enemy positions (game features) and comparing the performance of DQN and DRQN in both the fully observable Markov decision process (FOMDP) and the partially observable Markov decision process (POMDP) settings; we find that DQN performs better at predicting enemy positions. Finally, we present results on further game scenarios from Minecraft that test and confirm the performance of DRQN in the POMDP setting, where, unlike existing work, our proposed architectures outperform the built-in AI agents and human players in predicting game features with improved accuracy.
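The deep Q-learning variants mentioned above (DQN and DRQN) both train a network toward the same Bellman target. As a minimal illustrative sketch (not the chapter's actual implementation), the target value for a transition can be computed as follows; the function name and signature are assumptions for illustration:

```python
import numpy as np

def q_learning_target(reward, next_q_values, gamma=0.99, done=False):
    """Bellman target used to train DQN/DRQN:
    y = r                                if the episode ended, else
    y = r + gamma * max_a' Q(s', a')     bootstrapped from the target network.
    `next_q_values` holds Q(s', a') for every action a'."""
    if done:
        return float(reward)
    return float(reward) + gamma * float(np.max(next_q_values))
```

For example, with reward 1.0 and next-state Q-values `[0.2, 0.5, 0.1]`, the target is `1.0 + 0.99 * 0.5 = 1.495`. The difference between the two variants lies in how Q is estimated, not in this target: DQN feeds a stack of recent frames to a convolutional network (suitable for the FOMDP setting), while DRQN replaces the frame stack with a recurrent (LSTM) layer so the agent can integrate information over time under partial observability (POMDP).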


Springer Nature




Computer Sciences

Scopus ID


Indexed in Scopus


Open Access