Evaluating team behaviors constructed with human-guided machine learning (2015)
Machine learning games such as NERO incorporate adaptive methods such as neuroevolution as an integral part of the gameplay, allowing the player to train teams of autonomous agents for effective behavior in challenging open-ended tasks. However, rigorously evaluating such human-guided machine learning methods and the resulting teams of agent policies is challenging and thus rarely done. This paper presents the results and analysis of a large-scale online tournament between participants who evolved team agent behaviors and submitted them to be compared with others. An analysis of the teams submitted for the tournament indicates a complex, non-transitive fitness landscape, multiple successful strategies and training approaches, and performance above hand-constructed and random baselines. The tournament and analysis presented provide a practical way to study and improve human-guided machine learning methods and the resulting NPC team behaviors, potentially leading to better games and better game design tools in the future.
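A non-transitive fitness landscape means that submitted teams can form rock-paper-scissors style cycles, where no single team dominates all others. The following is a minimal sketch (not taken from the paper; the team names, win matrix, and helper function are hypothetical) of how such cycles could be detected from round-robin match results:

```python
# Hypothetical sketch: detect non-transitive cycles in a round-robin
# tournament of team policies. Team names and results are made up.
from itertools import permutations

# wins[a][b] = True means team a defeated team b in their head-to-head match.
wins = {
    "turtle": {"rusher": True,  "sniper": False},
    "rusher": {"turtle": False, "sniper": True},
    "sniper": {"turtle": True,  "rusher": False},
}

def non_transitive_triples(wins):
    """Return all (a, b, c) where a beats b, b beats c, yet c beats a.

    Rotations of the same cycle appear multiple times; an empty result
    would indicate a transitive (fully ordered) set of teams.
    """
    teams = list(wins)
    return [
        (a, b, c)
        for a, b, c in permutations(teams, 3)
        if wins[a].get(b) and wins[b].get(c) and wins[c].get(a)
    ]

print(non_transitive_triples(wins))
# A non-empty list indicates a rock-paper-scissors cycle, i.e. no single
# dominant strategy among the submitted teams.
```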
Citation:
To appear in Proceedings of the IEEE Conference on Computational Intelligence in Games (CIG 2015), August 31 - September 2, 2015.
Leif Johnson, leif [at] cs utexas edu
Igor V. Karpov (Masters Alumni), ikarpov [at] gmail com
Risto Miikkulainen (Faculty), risto [at] cs utexas edu
OpenNERO (2010): OpenNERO is a general research and education platform for artificial intelligence. The platform is based on a simulatio...

rtNEAT C++ (2006): The rtNEAT package contains source code implementing the real-time NeuroEvolution of Augmenting Topologies method. In ad...