Multiagent Learning through Neuroevolution (2012)
Neuroevolution is a promising approach for constructing intelligent agents in many complex tasks such as games, robotics, and decision making. It is also well suited for evolving team behavior for many multiagent tasks. However, new challenges and opportunities emerge in such tasks, including facilitating cooperation through reward sharing and communication, accelerating evolution through social learning, and measuring how good the resulting solutions are. This paper reviews recent progress in these three areas, and suggests avenues for future work.
In J. Liu et al., editors, Advances in Computational Intelligence, LNCS 7311, 24-46. Berlin, Heidelberg: Springer, 2012.
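As a minimal illustration of the neuroevolution approach the abstract refers to (not code from any of the packages listed below), the sketch here evolves the weights of a tiny fixed-topology network on the XOR task with a simple truncation-selection genetic algorithm. All names, the network shape, and the GA parameters are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(0)  # deterministic run for the sketch

def forward(w, x):
    # Tiny fixed-topology net: 2 inputs -> 2 tanh hidden units -> 1 tanh output.
    # w is a flat list of 9 weights, biases included.
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative squared error over the four XOR cases; higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, gens=200, sigma=0.5):
    # Random initial population of weight vectors.
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # keep the top 20% (truncation selection)
        # Refill the population with Gaussian-mutated copies of elite parents.
        pop = elite + [
            [g + random.gauss(0, sigma) for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
```

Methods such as NEAT and ESP (available below) go well beyond this sketch: NEAT evolves network topology along with the weights, and ESP evolves separate subpopulations of neurons that are combined into complete networks.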

Eliana Feasley Former Ph.D. Student elie [at] cs utexas edu
Leif Johnson leif [at] cs utexas edu
Igor V. Karpov Ph.D. Student ikarpov [at] gmail com
Risto Miikkulainen Faculty risto [at] cs utexas edu
Padmini Rajagopalan Ph.D. Alumni padmini [at] cs utexas edu
Aditya Rawal Ph.D. Student aditya [at] cs utexas edu
Wesley Tansey Former Collaborator tansey [at] cs utexas edu
ESL C# This is the C# source code for the experiments with Egalitarian Social Learning (ESL) in a robot foraging domain. 2012

NEAT C++ The NEAT package contains source code implementing the NeuroEvolution of Augmenting Topologies method. 2010

rtNEAT C++ The rtNEAT package contains source code implementing the real-time NeuroEvolution of Augmenting Topologies method. 2006

ESP C++ The ESP package contains the source code for the Enforced SubPopulations system written in C++. 2000