Efficient Reinforcement Learning Through Evolving Neural Network Topologies (2002)
Neuroevolution is currently the strongest method on the pole-balancing benchmark reinforcement learning tasks. Although earlier studies suggested that there is an advantage to evolving the network topology as well as the connection weights, the leading neuroevolution systems evolve networks with fixed topologies, so whether evolving structure can improve performance has remained an open question. In this article, we introduce a system that evolves both topology and weights, NeuroEvolution of Augmenting Topologies (NEAT). We show that when structure is evolved (1) with a principled method of crossover, (2) by protecting structural innovation, and (3) through incremental growth from minimal structure, learning is significantly faster and the resulting performance stronger than with the best fixed-topology methods. NEAT also shows that it is possible to evolve populations of increasingly large genomes, achieving highly complex solutions that would otherwise be difficult to optimize.

[ Winner of the GECCO-2002 Best Paper Award in Genetic Algorithms ]
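As a rough illustration of the first two ingredients named in the abstract (a principled crossover of different topologies via historical markings, and a compatibility measure used to protect structural innovation through speciation), the Python sketch below is a minimal, hypothetical rendering of those ideas. It is not the API of any of the NEAT packages listed further down this page; the names ConnectionGene, crossover, compatibility_distance, and the coefficients c1, c2, c3 are illustrative assumptions. The third ingredient, incremental growth from minimal structure, is omitted: genomes would start with inputs wired directly to outputs and gain nodes and connections only through mutation.

import random
from dataclasses import dataclass

@dataclass
class ConnectionGene:
    innovation: int   # historical marking assigned when this gene first appeared
    in_node: int
    out_node: int
    weight: float
    enabled: bool = True

def crossover(parent_a, parent_b):
    """Align genes by innovation number: matching genes are inherited at
    random, while disjoint and excess genes are taken from the fitter
    parent (assumed here to be parent_a)."""
    genes_b = {g.innovation: g for g in parent_b}
    child = []
    for gene_a in parent_a:
        gene_b = genes_b.get(gene_a.innovation)
        if gene_b is not None:
            child.append(random.choice([gene_a, gene_b]))  # matching gene
        else:
            child.append(gene_a)                           # disjoint/excess gene
    return child

def compatibility_distance(genome_a, genome_b, c1=1.0, c2=1.0, c3=0.4):
    """Distance used to group genomes into species: a weighted sum of the
    numbers of excess and disjoint genes plus the average weight difference
    of matching genes."""
    a = {g.innovation: g for g in genome_a}
    b = {g.innovation: g for g in genome_b}
    matching = a.keys() & b.keys()
    non_matching = (a.keys() | b.keys()) - matching
    cutoff = min(max(a, default=0), max(b, default=0))
    excess = sum(1 for i in non_matching if i > cutoff)
    disjoint = len(non_matching) - excess
    w_diff = (sum(abs(a[i].weight - b[i].weight) for i in matching) / len(matching)
              if matching else 0.0)
    n = max(len(genome_a), len(genome_b), 1)
    return c1 * excess / n + c2 * disjoint / n + c3 * w_diff

In a full system, genomes whose compatibility distance exceeds a threshold would be placed in separate species, giving a new structural mutation time to be optimized before it has to compete with the rest of the population.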

Citation:
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2002), San Francisco, 2002. Morgan Kaufmann.

Risto Miikkulainen, Faculty, risto [at] cs utexas edu
Kenneth Stanley, Postdoctoral Alumni, kstanley [at] cs ucf edu
NEAT C++ (2010): The NEAT package contains source code implementing the NeuroEvolution of Augmenting Topologies method.

NEAT C# (2003): The SharpNEAT package contains C# source code for the NeuroEvolution of Augmenting Topologies method.

NEAT Matlab (2003): The Matlab NEAT package contains Matlab source code for the NeuroEvolution of Augmenting Topologies method.

NEAT C++ for Microsoft Windows (2002): The Windows NEAT package contains C++ source code for the NeuroEvolution of Augmenting Topologies method.

NEAT Java (JNEAT) (2002): The JNEAT package contains Java source code for the NeuroEvolution of Augmenting Topologies method.