General Video Game Playing as a Benchmark for Human-Competitive AI (2015)
Text-based conversational Turing test benchmarks for artificial intelligence have significant limitations: it is possible to do well merely by faking intelligence, thereby subverting the test's intent. An ideal replacement test would facilitate and focus AI research, be easy to implement and automate, and ensure that human-competitive performance implies a powerful and general AI. This paper argues that general video game playing is one such promising candidate for an improved human-level AI benchmark. To pass such a test, a computer program must efficiently complete an unannounced and diverse suite of video games, interacting with each game only through simulated versions of the same information streams available to a human player. The test is easy to automate but difficult to exploit, and it can stress nearly all aspects of human intelligence through strategic sampling of the vast library of existing video games. In this way, general video game playing may provide the basis for a simple but effective benchmark competition for human-level AI.
Citation: In AAAI-15 Workshop on Beyond the Turing Test, 2015.
Joel Lehman Postdoctoral Alumni joel [at] cs utexas edu
Risto Miikkulainen Faculty risto [at] cs utexas edu