Neural Networks Research Group
Evaluating Modular Neuroevolution in Robotic Keepaway Soccer (2012)
Anand Subramoney
Keepaway is a subtask of robot soccer in which three 'keepers' attempt to keep possession of the ball while a 'taker' tries to steal it from them. Because it is less complex than full robot soccer, it serves well as a testbed for multi-agent systems. This thesis presents a comprehensive evaluation of learning methods based on neuroevolution with Enforced Sub-Populations (ESP) in the RoboCup soccer simulator. Both single- and multi-component ESP are evaluated with various learning methods on homogeneous and heterogeneous teams of agents. In particular, the effectiveness of modularity and task decomposition for evolving keepaway teams is evaluated. The results show that in the RoboCup soccer simulator, homogeneous agents controlled by monolithic networks perform best. More complex learning approaches such as layered learning, concurrent layered learning, and co-evolution decrease performance, as does making the agents heterogeneous. The results are also compared with previous results in the keepaway domain.
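The abstract refers to neuroevolution with Enforced Sub-Populations (ESP), in which each hidden neuron of a network is evolved in its own sub-population and complete networks are assembled by drawing one neuron from each. The sketch below is a minimal, illustrative Python version of that loop; the network sizes, mutation parameters, and the evaluate() stub are placeholders and are not taken from the thesis, where fitness would instead come from simulated keepaway episodes.

```python
# Minimal sketch of Enforced Sub-Populations (ESP) neuroevolution.
# All sizes and parameters below are illustrative assumptions.
import random

N_HIDDEN = 4        # one sub-population per hidden neuron
SUBPOP_SIZE = 20    # candidate neurons per sub-population
N_IN, N_OUT = 3, 2  # placeholder network input/output sizes
WEIGHTS_PER_NEURON = N_IN + N_OUT

def random_neuron():
    # Each candidate encodes the input and output weights of one hidden unit.
    return [random.uniform(-1, 1) for _ in range(WEIGHTS_PER_NEURON)]

def evaluate(network):
    # Placeholder fitness; in the keepaway domain this would be, e.g.,
    # the average ball-possession time of a team controlled by the network.
    return -sum(abs(w) for neuron in network for w in neuron)

def esp_generation(subpops, trials=10):
    fitness = [[0.0] * SUBPOP_SIZE for _ in range(N_HIDDEN)]
    counts = [[0] * SUBPOP_SIZE for _ in range(N_HIDDEN)]
    for _ in range(trials * SUBPOP_SIZE):
        # Assemble a network by drawing one neuron from each sub-population.
        picks = [random.randrange(SUBPOP_SIZE) for _ in range(N_HIDDEN)]
        net = [subpops[i][p] for i, p in enumerate(picks)]
        f = evaluate(net)
        for i, p in enumerate(picks):
            fitness[i][p] += f
            counts[i][p] += 1
    # Select and mutate within each sub-population independently.
    for i in range(N_HIDDEN):
        avg = [fitness[i][j] / max(counts[i][j], 1) for j in range(SUBPOP_SIZE)]
        ranked = sorted(range(SUBPOP_SIZE), key=lambda j: avg[j], reverse=True)
        elites = [subpops[i][j] for j in ranked[: SUBPOP_SIZE // 4]]
        subpops[i] = [
            [w + random.gauss(0, 0.1) for w in random.choice(elites)]
            for _ in range(SUBPOP_SIZE)
        ]

subpops = [[random_neuron() for _ in range(SUBPOP_SIZE)] for _ in range(N_HIDDEN)]
for gen in range(5):
    esp_generation(subpops)
```

In the multi-component variants evaluated in the thesis, several such ESP-evolved networks would be combined (e.g., one per sub-task), whereas the monolithic setting uses a single network per agent.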
View:
PDF
Citation:
Master's Thesis, Department of Computer Science, The University of Texas at Austin, Austin, TX, 2012. 54 pages.
Bibtex:
@mastersthesis{subramoney:mastersthesis12,
  title   = {Evaluating Modular Neuroevolution in Robotic Keepaway Soccer},
  author  = {Anand Subramoney},
  school  = {Department of Computer Science, The University of Texas at Austin},
  address = {Austin, TX},
  pages   = {54},
  url     = {http://nn.cs.utexas.edu/?subramoney:ms12},
  year    = {2012}
}
People
Anand Subramoney
Master's Alumni
anands [at] cs utexas edu
Projects
Learning Strategic Behavior in Sequential Decision Tasks
2009 - 2014
Areas of Interest
Evolutionary Computation
Neuroevolution
Robotics
Artificial Life
Game Playing