Coevolving Strategies for General Game Playing (2007)
The General Game Playing Competition poses a unique challenge for Artificial Intelligence. To be successful, a player must learn to play well from a limited number of example games encoded in first-order logic and then generalize its game play to previously unseen games with entirely different rules. Because good opponents are usually unavailable, learning algorithms must construct plausible opponent strategies against which to benchmark performance. One approach to learning all player strategies simultaneously is coevolution. This paper presents a coevolutionary approach that uses NeuroEvolution of Augmenting Topologies (NEAT) to evolve populations of game state evaluators. The approach is tested on a sample of games from the General Game Playing Competition and shown to be effective: it lets the algorithm designer minimize the domain knowledge built into the system, which leads to more general game play, and it models opponent strategies efficiently. Furthermore, the General Game Playing domain proves to be a powerful testbed for developing and evaluating coevolutionary methods.
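The coevolutionary setup the abstract describes — two populations of game state evaluators, each scored by play against sampled opponents from the other — can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's implementation: it uses a toy Nim-like game in place of GGP games, simple linear evaluators in place of NEAT networks (no topology evolution), and invented names (`play`, `fitness`, `evolve`) and parameters throughout.

```python
import random

STONES = 7  # toy game: take 1 or 2 stones per turn; whoever takes the last stone wins


def play(eval_a, eval_b):
    """Play one game; each player picks the move whose resulting state
    its evaluator scores highest. Returns 1 if player A wins, else 0."""
    stones, players, turn = STONES, (eval_a, eval_b), 0
    while stones > 0:
        ev = players[turn]
        moves = [m for m in (1, 2) if m <= stones]  # legal moves
        stones -= max(moves, key=lambda m: ev(stones - m))
        if stones == 0:
            return 1 if turn == 0 else 0
        turn = 1 - turn


def make_evaluator(weights):
    # Linear evaluator over two simple state features (stand-in for a NEAT network).
    return lambda s: weights[0] * s + weights[1] * (s % 3)


def fitness(ind, opponents, n=5):
    """Coevolutionary fitness: wins against a random sample of the other population."""
    sample = random.sample(opponents, min(n, len(opponents)))
    return sum(play(make_evaluator(ind), make_evaluator(o)) for o in sample)


def evolve(generations=30, pop_size=20):
    """Coevolve two populations of evaluators with truncation selection + mutation."""
    new = lambda: [random.uniform(-1, 1), random.uniform(-1, 1)]
    pop_a = [new() for _ in range(pop_size)]
    pop_b = [new() for _ in range(pop_size)]
    for _ in range(generations):
        for pop, opp in ((pop_a, pop_b), (pop_b, pop_a)):
            ranked = sorted(pop, key=lambda i: fitness(i, opp), reverse=True)
            elite = ranked[: pop_size // 2]
            pop[:] = elite + [
                [w + random.gauss(0, 0.2) for w in random.choice(elite)]
                for _ in range(pop_size - len(elite))
            ]
    return pop_a, pop_b
```

Because each individual's fitness depends on the current opposing population rather than a fixed objective, the benchmark shifts as both sides improve — the "plausible opponent strategies" the abstract refers to emerge from the coevolutionary dynamics rather than being hand-designed.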
In Proceedings of the IEEE Symposium on Computational Intelligence and Games, 320–327, Piscataway, NJ, 2007. IEEE.

Erkin Bahceci Ph.D. Alumni erkin [at] cs utexas edu
Igor V. Karpov Masters Alumni ikarpov [at] gmail com
Risto Miikkulainen Faculty risto [at] cs utexas edu
Joseph Reisinger Former Ph.D. Student joeraii [at] cs utexas edu