Neuroannealing: Martingale-Driven Optimization for Neural Networks (2013)
Neural networks are effective tools for prediction, modeling, and control tasks. However, methods for training neural networks have been less successful on control problems that require the network to model intricately structured regions in state space. This paper presents neuroannealing, a method for training neural network controllers on such problems. Neuroannealing is based on evolutionary annealing, a global optimization method that leverages all available information to search for the global optimum. Because neuroannealing retains all intermediate solutions, it can represent the fitness landscape more accurately than traditional generational methods and therefore finds solutions that require greater network complexity. This hypothesis is tested on two problems with fractured state spaces. Such problems are difficult for other methods such as NEAT because they require relatively deep network topologies in order to extract the relevant features of the network inputs. Neuroannealing outperforms NEAT on these problems, supporting the hypothesis. Overall, neuroannealing is a promising approach for training neural networks to solve complex practical problems.
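The core idea the abstract describes, retaining every evaluated solution and selecting parents from that full archive under a cooling schedule, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual algorithm: all names (`annealed_select`, `anneal`) and parameter choices are invented for illustration, and the selection rule here is plain Boltzmann weighting over the archive.

```python
import math
import random

def annealed_select(archive, temperature):
    """Sample a parent from the full archive of (solution, fitness) pairs
    using Boltzmann selection: at high temperature selection is nearly
    uniform; as temperature falls it concentrates on the best solutions."""
    best = max(f for _, f in archive)
    # Subtract the best fitness before exponentiating for numerical stability.
    weights = [math.exp((f - best) / temperature) for _, f in archive]
    x, _ = random.choices(archive, weights=weights, k=1)[0]
    return x

def anneal(fitness, mutate, x0, steps=200, t0=1.0, cooling=0.98):
    """Hypothetical driver: keep every evaluated solution (nothing is ever
    discarded), select parents from the whole archive, and cool over time."""
    archive = [(x0, fitness(x0))]
    t = t0
    for _ in range(steps):
        parent = annealed_select(archive, t)
        child = mutate(parent)
        archive.append((child, fitness(child)))  # archive only grows
        t *= cooling
    return max(archive, key=lambda p: p[1])

# Toy usage: maximize -(x - 3)^2 with Gaussian mutations.
if __name__ == "__main__":
    random.seed(0)
    best_x, best_f = anneal(lambda x: -(x - 3.0) ** 2,
                            lambda x: x + random.gauss(0.0, 0.5),
                            x0=0.0)
    print(best_x, best_f)
```

Because the archive is never pruned, early low-fitness points remain available as parents, which is what lets this style of search back out of deceptive regions of a fractured landscape.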
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2013). ACM Press, 2013.

Alan J. Lockett (Ph.D. Alumni), alan.lockett [at] gmail.com
Risto Miikkulainen (Faculty), risto [at] cs.utexas.edu