Evolving Multimodal Networks for Multitask Games (2011)
Intelligent opponent behavior helps make video games interesting to human players. Evolutionary computation can discover such behavior, especially when the game consists of a single task. However, multitask domains, in which separate tasks within the domain each have their own dynamics and objectives, can be challenging for evolution. This paper proposes two methods for meeting this challenge by evolving neural networks: 1) Multitask Learning provides a network with distinct outputs per task, thus evolving a separate policy for each task, and 2) Mode Mutation provides a means to evolve new output modes, as well as a way to select which mode to use at each moment. Multitask Learning assumes agents know which task they are currently facing; if such information is available and accurate, this approach works very well, as demonstrated in the Front/Back Ramming game of this paper. In contrast, Mode Mutation discovers an appropriate task division on its own, which may in some cases be even more powerful than a human-specified task division, as shown in the Predator/Prey game of this paper. These results demonstrate the importance of both Multitask Learning and Mode Mutation for learning intelligent behavior in complex games.
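The two mechanisms can be pictured with a small sketch. The Python code below is a minimal, hypothetical illustration, not the paper's actual neuroevolution implementation: a shared hidden layer feeds several output "modes" (one full set of action outputs each); a known task label selects the mode (Multitask Learning), while without a label a per-mode preference output arbitrates, and a Mode-Mutation-style operator adds a new mode. All names and parameters here are assumptions for illustration only.

```python
import numpy as np

class MultimodalNetwork:
    """Illustrative sketch of a multimodal control network (assumed structure,
    not the paper's NEAT-based networks)."""

    def __init__(self, n_inputs, n_hidden, n_actions, n_modes,
                 use_preference=False, rng=None):
        self.rng = rng or np.random.default_rng()
        self.n_actions = n_actions
        self.use_preference = use_preference
        # Shared hidden layer.
        self.W_hidden = self.rng.normal(size=(n_hidden, n_inputs))
        # One weight matrix per output mode; with preference arbitration,
        # each mode gets one extra output row that scores how strongly
        # that mode "wants" control on the current timestep.
        out_rows = n_actions + (1 if use_preference else 0)
        self.modes = [self.rng.normal(size=(out_rows, n_hidden))
                      for _ in range(n_modes)]

    def activate(self, inputs, task_id=None):
        h = np.tanh(self.W_hidden @ np.asarray(inputs, dtype=float))
        outputs = [W @ h for W in self.modes]
        if task_id is not None:
            # Multitask Learning: the known task label picks the mode,
            # so each task effectively gets its own policy.
            chosen = outputs[task_id]
        else:
            # Mode-Mutation-style arbitration: the mode whose preference
            # output (last row) is highest controls the agent.
            chosen = max(outputs, key=lambda o: o[-1])
        return chosen[:self.n_actions]

    def mode_mutation(self):
        """Add a new, randomly weighted output mode, analogous in spirit
        to the Mode Mutation operator described in the abstract."""
        out_rows = self.n_actions + (1 if self.use_preference else 0)
        self.modes.append(self.rng.normal(size=(out_rows, self.W_hidden.shape[0])))

# Usage sketch: a two-task agent with known task labels (Multitask Learning),
# and a self-dividing agent that arbitrates via preference outputs.
multitask_net = MultimodalNetwork(n_inputs=8, n_hidden=10, n_actions=3, n_modes=2)
actions = multitask_net.activate(np.zeros(8), task_id=1)

mm_net = MultimodalNetwork(n_inputs=8, n_hidden=10, n_actions=3, n_modes=1,
                           use_preference=True)
mm_net.mode_mutation()                  # evolve a new mode
actions = mm_net.activate(np.zeros(8))  # preference outputs choose the mode
```

In an evolutionary setting, the weights (and the number of modes) would be varied by mutation and selected by fitness; the sketch only shows how task labels or preference outputs decide which mode acts at each moment.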

[Winner of the Best Paper award at CIG'11]
[An expanded version was later published as a journal article]
View:
PDF
Citation:
In Proceedings of the IEEE Conference on Computational Intelligence and Games (CIG 2011), pp. 102–109, Seoul, South Korea, September 2011. IEEE. (Best Paper Award).

Presentation:
Slides (PPT)
Authors:
Risto Miikkulainen, Faculty, risto [at] cs utexas edu
Jacob Schrum, Ph.D. Alumni, schrum2 [at] southwestern edu