The Enforced Subpopulations (ESP) method can be extended to evolve multiple networks simultaneously and applied to multi-agent problem-solving tasks. In the prey-capture domain, multiple predators evolved distinct but compatible roles, so that the team as a whole captured the prey efficiently. Remarkably, this multi-agent evolution was more efficient than evolving a single central controller for the task. Moreover, the predators did not need to communicate or even know the other predators' locations: role-based cooperation alone was highly effective in this task, whereas adding communication produced more general but less effective behavior. These results suggest that multi-agent neuroevolution is a promising approach for complex real-world tasks.
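The cooperative structure described above can be sketched in code. The sketch below is a minimal illustration under simplifying assumptions, not the actual ESP implementation: ESP proper evolves subpopulations of neurons within each network, whereas here each agent simply gets its own population of weight vectors, and the prey-capture simulation is replaced by a toy fitness that rewards role differentiation among teammates. The key idea it demonstrates is evaluating genomes jointly as teams while evolving each agent's population separately.

```python
import random

random.seed(0)

NUM_AGENTS = 3    # one population per predator
POP_SIZE = 20
GENOME_LEN = 8    # stand-in for network weights
GENERATIONS = 30

def team_fitness(team):
    # Toy stand-in for prey capture: reward teams whose "role vectors"
    # are complementary (pairwise different), mimicking role-based
    # cooperation without any communication between agents.
    score = 0.0
    for i in range(len(team)):
        for j in range(i + 1, len(team)):
            score += sum(abs(a - b) for a, b in zip(team[i], team[j]))
    return score

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome, sigma=0.1):
    return [w + random.gauss(0, sigma) for w in genome]

# Separate population per agent: cooperative coevolution at the
# network level rather than ESP's neuron level.
populations = [[random_genome() for _ in range(POP_SIZE)]
               for _ in range(NUM_AGENTS)]

for gen in range(GENERATIONS):
    fitness = [[0.0] * POP_SIZE for _ in range(NUM_AGENTS)]
    # Form random teams across populations; every member of a team
    # receives the shared team score (best over its trials).
    for trial in range(POP_SIZE):
        team_idx = [random.randrange(POP_SIZE) for _ in range(NUM_AGENTS)]
        team = [populations[a][team_idx[a]] for a in range(NUM_AGENTS)]
        f = team_fitness(team)
        for a in range(NUM_AGENTS):
            fitness[a][team_idx[a]] = max(fitness[a][team_idx[a]], f)
    # Truncation selection within each population, refilled by mutation.
    for a in range(NUM_AGENTS):
        ranked = sorted(range(POP_SIZE), key=lambda i: fitness[a][i],
                        reverse=True)
        elite = [populations[a][i] for i in ranked[:POP_SIZE // 2]]
        populations[a] = elite + [mutate(random.choice(elite))
                                  for _ in range(POP_SIZE - len(elite))]

best_team = [pop[0] for pop in populations]
print(f"final team fitness: {team_fitness(best_team):.2f}")
```

Because fitness is assigned only to the team, each population is pressured to evolve a role that complements whatever the other populations have converged on, which is the mechanism behind the role-based cooperation observed in the prey-capture experiments.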
We are currently working on applying this approach to other multi-agent games.
One such multi-agent task to which neuroevolution has already been successfully applied is robot soccer. Below, we summarize work comparing three different learning methods in two versions of the robot soccer keepaway domain.