Online Interactive Neuro-Evolution (2000)
In standard neuro-evolution, a population of networks is evolved in a task, and the network that best solves the task is retained. This network is then fixed and used to solve future instances of the problem. Networks evolved in this way do not handle real-time interaction well: it is hard to evolve a solution ahead of time that can cope effectively with all the possible environments that might arise in the future, and with all the possible ways someone may interact with it. This paper proposes evolving feedforward neural networks online to create agents that improve their performance through real-time interaction. The approach is demonstrated in a game world where neural-network-controlled individuals play against humans. Through evolution, these individuals learn to react to varying opponents while appropriately taking conflicting goals into account. After initial evaluation offline, the population is allowed to evolve online, and its performance improves considerably. The population not only adapts to novel situations brought about by changing opponent strategies and game layouts, but also improves its performance in situations it has already seen in offline training. The paper describes an implementation of online evolution and shows that it is a practical method that exceeds the performance of offline evolution alone.
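The abstract only sketches the method at a high level. As a rough illustration of what an online, steady-state neuro-evolution loop can look like, here is a minimal Python sketch. It is not the paper's implementation: the network sizes, the `play_episode` callback, the running-average fitness, and the replacement scheme are all assumptions made for illustration.

```python
import random

import numpy as np

# Illustrative sketch of online neuro-evolution (not the paper's code).
# A small population of feedforward networks is evaluated continuously while
# the game runs; after each interaction the worst performer is replaced by a
# mutated crossover of two fitter networks, so adaptation happens in real time.

N_IN, N_HIDDEN, N_OUT = 4, 6, 2  # assumed sensor/action sizes


class FeedforwardNet:
    """One-hidden-layer feedforward network with tanh units."""

    def __init__(self, weights=None):
        self.shapes = [(N_HIDDEN, N_IN), (N_OUT, N_HIDDEN)]
        size = sum(r * c for r, c in self.shapes)
        self.weights = weights if weights is not None else np.random.randn(size) * 0.5

    def forward(self, x):
        split = self.shapes[0][0] * self.shapes[0][1]
        w1, w2 = np.split(self.weights, [split])
        w1 = w1.reshape(self.shapes[0])
        w2 = w2.reshape(self.shapes[1])
        return np.tanh(w2 @ np.tanh(w1 @ x))


def crossover(a, b):
    # Uniform crossover over the flattened weight vectors.
    mask = np.random.rand(a.weights.size) < 0.5
    return np.where(mask, a.weights, b.weights)


def mutate(w, rate=0.1, scale=0.3):
    # Perturb a random subset of weights with Gaussian noise.
    noise = np.random.randn(w.size) * scale
    return w + noise * (np.random.rand(w.size) < rate)


def online_evolution_step(population, fitness, play_episode):
    """Evaluate one network against the opponent, then do one steady-state
    replacement: worst individual <- offspring of two parents drawn from the
    fitter half of the population (assumes population size >= 4)."""
    i = random.randrange(len(population))
    # Running average keeps fitness responsive to the opponent's current strategy.
    fitness[i] = 0.7 * fitness[i] + 0.3 * play_episode(population[i])

    order = sorted(range(len(population)), key=lambda k: fitness[k], reverse=True)
    p1, p2 = random.sample(order[: len(order) // 2], 2)
    worst = order[-1]
    child = mutate(crossover(population[p1], population[p2]))
    population[worst] = FeedforwardNet(weights=child)
    # Simplification: the child inherits a parent's fitness until evaluated.
    fitness[worst] = fitness[p1]


if __name__ == "__main__":
    pop = [FeedforwardNet() for _ in range(20)]
    fit = [0.0] * len(pop)

    # Stand-in for a real game episode: reward nets whose output tracks a target.
    def play_episode(net):
        x = np.random.randn(N_IN)
        return -float(np.sum((net.forward(x) - 0.5) ** 2))

    for _ in range(200):
        online_evolution_step(pop, fit, play_episode)
    print("best fitness estimate:", max(fit))
```

The design choice that makes such a loop "online" is steady-state replacement: one individual is re-evaluated and one offspring inserted per interaction, so the game never pauses for a full generational cycle and the population can track a changing opponent.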
View:
PDF, PS
Citation:
Neural Processing Letters, 29-38, 2000.
Authors:
Adrian Agogino, Former Collaborator, adrian.k.agogino [at] nasa.gov
Risto Miikkulainen, Faculty, risto [at] cs.utexas.edu
Kenneth Stanley, Postdoctoral Alumni, kstanley [at] cs.ucf.edu