Evolving Neural Networks for Fractured Domains (2008)
Evolution of neural networks, or neuroevolution, has been successful on many low-level control problems such as pole balancing, vehicle control, and collision warning. However, high-level strategy problems that require the integration of multiple sub-behaviors have remained difficult for neuroevolution to solve. This paper proposes the hypothesis that such problems are difficult because they are fractured: the correct action varies discontinuously as the agent moves from state to state. This hypothesis is evaluated on several examples of fractured high-level reinforcement learning domains. Standard neuroevolution methods such as NEAT indeed have difficulty solving them. However, a modification of NEAT that uses radial basis function (RBF) nodes to make precise local mutations to network output does much better. These results provide a better understanding of the different types of reinforcement learning problems and the limitations of current neuroevolution methods. Thus, they lay the groundwork for creating the next generation of neuroevolution algorithms that can learn strategic high-level behavior in fractured domains.
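For illustration only, the sketch below shows why Gaussian RBF nodes support precise local mutations: each node responds strongly only near its center, so perturbing one node's parameters changes the output mapping only in that region of the state space, which is what a fractured domain demands. This is a minimal assumed example, not the paper's implementation; the function names and parameters are hypothetical.

    import numpy as np

    def rbf_node(x, center, width):
        """Gaussian radial basis function: near-zero response away from its
        center, so a change to this node affects the output only locally."""
        return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

    def network_output(x, nodes, weights):
        """Weighted sum of RBF activations; mutating one (center, width,
        weight) triple reshapes the policy only in that node's region."""
        return sum(w * rbf_node(x, c, s) for (c, s), w in zip(nodes, weights))

    # Example: two RBF nodes covering different regions of a 2-D state space.
    nodes = [(np.array([0.2, 0.8]), 0.1), (np.array([0.7, 0.3]), 0.1)]
    weights = [1.0, -1.0]
    print(network_output(np.array([0.25, 0.75]), nodes, weights))

A sigmoid node, by contrast, responds across a half-space of the input, so a single weight mutation shifts the output over large regions of the state space at once.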
View:
PDF
Citation:
In Proceedings of the Genetic and Evolutionary Computation Conference, 1405-1412, July 2008.
Bibtex:
@InProceedings{kohl:gecco08,
  author    = {Nate Kohl and Risto Miikkulainen},
  title     = {Evolving Neural Networks for Fractured Domains},
  booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
  pages     = {1405--1412},
  month     = {July},
  year      = {2008},
}
Nate Kohl Ph.D. Alumni nate [at] natekohl net
Risto Miikkulainen Faculty risto [at] cs utexas edu