Evolving Cooperation in Multiagent Systems (2007)
Author: Chern Yong
In tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. Using the Multi-agent ESP method, such agents can be evolved effectively as separate networks that are rewarded together as a team. This demo shows two examples of evolved behavior in the prey-capture task in a toroidal grid world.
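To make the team-reward idea concrete, below is a minimal, simplified sketch of the cooperative-coevolution loop in the spirit of Multi-agent ESP. It is an assumption-laden illustration, not the actual implementation: it keeps one population of weight vectors per predator (the real method evolves a subpopulation per hidden neuron of each network), evaluate_team is a dummy placeholder standing in for the prey-capture simulation, and all names and parameters are made up for the example.

import random

N_PREDATORS = 3
POP_SIZE = 20
TRIALS_PER_GENOME = 5

def random_genome(n_weights=32):
    # Weight vector standing in for one predator's neural network.
    return [random.uniform(-1.0, 1.0) for _ in range(n_weights)]

def evaluate_team(team):
    # Placeholder for the prey-capture simulation: run the networks as a
    # team and return one shared score (e.g., how quickly the prey is caught).
    return -sum(abs(w) for genome in team for w in genome)

# One population per predator; each genome accumulates the fitness of the
# teams it participated in.
populations = [[random_genome() for _ in range(POP_SIZE)]
               for _ in range(N_PREDATORS)]

for generation in range(10):
    fitness = [[0.0] * POP_SIZE for _ in range(N_PREDATORS)]
    counts = [[0] * POP_SIZE for _ in range(N_PREDATORS)]
    for _ in range(TRIALS_PER_GENOME * POP_SIZE):
        # Form a team by drawing one genome from each predator's population.
        picks = [random.randrange(POP_SIZE) for _ in range(N_PREDATORS)]
        team = [populations[i][picks[i]] for i in range(N_PREDATORS)]
        score = evaluate_team(team)  # every member shares the team reward
        for i, j in enumerate(picks):
            fitness[i][j] += score
            counts[i][j] += 1
    # Keep the better half of each population and refill with mutated copies.
    for i in range(N_PREDATORS):
        avg = [fitness[i][j] / max(1, counts[i][j]) for j in range(POP_SIZE)]
        order = sorted(range(POP_SIZE), key=lambda j: avg[j], reverse=True)
        elite = [populations[i][j] for j in order[:POP_SIZE // 2]]
        populations[i] = elite + [
            [w + random.gauss(0, 0.1) for w in random.choice(elite)]
            for _ in range(POP_SIZE - len(elite))
        ]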

In the role-based animation, the predator agents (red, green, and blue squares) do not sense each other directly. Instead, they learn to coordinate through stigmergy, i.e. through changes in the environment that result from their actions. The red agent has learned the role of a blocker, waiting in the path of the prey (shown as an X). The other two are chasers, driving the prey towards the blocker until it has nowhere to run (remember the world is a toroid). This kind of role-based cooperation is easier to learn, more robust, and more effective than communication-based cooperation in this task. The team learns behavior similar to that of a well-trained soccer team, where the players know what to expect from their teammates, making direct communication unnecessary.
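The blocking strategy works only because distances wrap around: the blocker can wait "ahead" of the prey even while sitting on the far side of the grid. A small sketch of how offsets and distances can be computed on such a torus follows; the grid size of 100 is an assumption for illustration, not the demo's actual dimensions.

GRID = 100  # hypothetical grid size; the demo's actual dimensions may differ

def toroidal_offset(a, b, size=GRID):
    # Shortest signed offset from coordinate a to coordinate b on a wrap-around axis.
    d = (b - a) % size
    return d - size if d > size // 2 else d

def toroidal_distance(p, q, size=GRID):
    # Manhattan distance on the torus, e.g. to decide when the prey is boxed in.
    return sum(abs(toroidal_offset(pa, qa, size)) for pa, qa in zip(p, q))

# Example: on a 100-wide axis, a prey at x=95 is only 10 steps from a blocker at x=5.
assert toroidal_offset(95, 5) == 10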

In the communication-based animation, the predators broadcast their locations to all other predators; their coordination is therefore based on communication. The predators first all chase the prey vertically, from different directions, eventually forcing it to flee horizontally. At that point, the red agent assumes the role of the blocker and the other two chase the prey towards it until it is caught between them (the world wraps around at that point). In this typical behavior of communicating agents, the team members use different strategies at different times. The behavior is more flexible, but harder to learn, and neither as robust nor as effective. It resembles play in pickup soccer, where the players have to constantly observe what their teammates are doing and adapt to it.
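As a rough illustration of how the two sensing schemes could differ at the network-input level (this encoding is an assumption for the example, not the demo's actual representation): in the role-based setup each predator senses only the prey's relative position, while in the communication-based setup the broadcast teammate positions are appended to the input vector.

GRID = 100  # hypothetical grid size, as in the earlier sketch

def offset(a, b, size=GRID):
    # Shortest signed offset along one wrap-around axis.
    d = (b - a) % size
    return d - size if d > size // 2 else d

def role_based_inputs(pred, prey):
    # Stigmergic setup: only the prey's relative position is sensed.
    return [offset(pred[0], prey[0]), offset(pred[1], prey[1])]

def communication_based_inputs(pred, prey, teammates):
    # Broadcast setup: the prey offset plus every teammate's relative position.
    sensed = role_based_inputs(pred, prey)
    for mate in teammates:
        sensed += [offset(pred[0], mate[0]), offset(pred[1], mate[1])]
    return sensed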

The conclusion is that role-based cooperation is a surprisingly effective approach in certain multi-agent domains such as prey capture.

Chern Han Yong, Masters Alumni, cherny [at] nus edu sg
Risto Miikkulainen, Faculty, risto [at] cs utexas edu
Coevolution of Role-Based Cooperation in Multi-Agent Systems. Chern Han Yong and Risto Miikkulainen. Technical Report AI07-338, Department of Computer Sciences, The University of Texas at Austin, 2007.

IJCNN-2013 Tutorial on Evolution of Neural Networks. Risto Miikkulainen. Tutorial slides, 2013.

Coevolution of Role-Based Cooperation in Multi-Agent Systems. Chern Han Yong and Risto Miikkulainen. IEEE Transactions on Autonomous Mental Development, 1:170--186, 2010.

Multiagent Learning through Neuroevolution. Risto Miikkulainen, Eliana Feasley, Leif Johnson, Igor Karpov, Padmini Rajagopalan, Aditya Rawal, an... In J. Liu et al., editors, Advances in Computational Intelligence, LNCS 7311, 24-46, Berlin, ..., 2012.

Cooperative Coevolution of Multi-Agent Systems. Chern Han Yong. Technical Report HR-00-01, Department of Computer Sciences, The University of Texas at Austin, 2000.

ESP C++. The ESP package contains the source code for the Enforced Sub-Populations system, written in C++. ESP is an extension t... 2000