Robust Non-Linear Control through Neuroevolution (2003)
Many complex control problems require sophisticated solutions that are not amenable to traditional controller design. Not only is it difficult to model real-world systems, but it is often unclear what kind of behavior is required to solve the task. Reinforcement learning approaches have made progress on such problems, but have so far not scaled well. Neuroevolution has improved upon conventional reinforcement learning, but has still not been successful in full-scale, non-linear control problems. This dissertation develops a methodology for solving real-world control tasks consisting of three components: (1) an efficient neuroevolution algorithm that solves difficult non-linear control tasks by coevolving neurons, (2) an incremental evolution method to scale the algorithm to the most challenging tasks, and (3) a technique for making controllers robust so that they can transfer from simulation to the real world. The method is faster than other approaches on a set of difficult learning benchmarks, and is applied to two full-scale control tasks, demonstrating its applicability to real-world problems.
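The first component, coevolving neurons, can be illustrated with a short sketch. The C++ program below is a minimal, hypothetical illustration of neuron-level cooperative coevolution in the spirit of ESP: each hidden-unit position has its own subpopulation of candidate neurons, networks are assembled by drawing one neuron from each subpopulation, and the network's score is shared among the neurons that took part. The class names, parameter values, mutation scheme, and the XOR stand-in fitness are assumptions made for brevity; the dissertation evaluates networks in control simulations such as pole balancing, not on XOR, and adds further machinery (e.g. burst mutation and incremental evolution) on top of this basic loop.

// Minimal sketch of ESP-style cooperative neuroevolution at the neuron level.
// Names, parameters, and the XOR stand-in fitness are illustrative assumptions;
// the dissertation evaluates networks in control simulations such as pole balancing.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

constexpr int NUM_INPUTS  = 2;    // toy task inputs
constexpr int NUM_HIDDEN  = 4;    // one subpopulation per hidden unit
constexpr int SUBPOP_SIZE = 20;   // candidate neurons per subpopulation
constexpr int TRIALS      = 200;  // network evaluations per generation
constexpr int GENERATIONS = 100;

std::mt19937 rng(0);

struct Neuron {                   // genome: input weights + one output weight
    std::vector<double> w;
    double fitness = 0.0;
    int trials = 0;
};

// Stand-in fitness: negative squared error on XOR for the assembled network.
double evaluate(const std::vector<Neuron*>& net) {
    const double X[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double Y[4]    = {0,1,1,0};
    double err = 0.0;
    for (int p = 0; p < 4; ++p) {
        double out = 0.0;
        for (const Neuron* n : net) {
            double act = 0.0;
            for (int i = 0; i < NUM_INPUTS; ++i) act += n->w[i] * X[p][i];
            out += n->w[NUM_INPUTS] * std::tanh(act);   // hidden unit contribution
        }
        err += (out - Y[p]) * (out - Y[p]);
    }
    return -err;
}

int main() {
    std::uniform_real_distribution<double> init(-1.0, 1.0);
    std::uniform_int_distribution<int> pick(0, SUBPOP_SIZE - 1);
    std::normal_distribution<double> mut(0.0, 0.3);

    // One subpopulation of candidate neurons per hidden-unit position.
    std::vector<std::vector<Neuron>> subpops(NUM_HIDDEN,
                                             std::vector<Neuron>(SUBPOP_SIZE));
    for (auto& sp : subpops)
        for (auto& n : sp) {
            n.w.resize(NUM_INPUTS + 1);
            for (auto& w : n.w) w = init(rng);
        }

    for (int gen = 0; gen < GENERATIONS; ++gen) {
        for (auto& sp : subpops)
            for (auto& n : sp) { n.fitness = 0.0; n.trials = 0; }

        // Assemble networks by drawing one neuron from each subpopulation and
        // share the network's score among the neurons that participated.
        double best = -1e18;
        for (int t = 0; t < TRIALS; ++t) {
            std::vector<Neuron*> net;
            for (auto& sp : subpops) net.push_back(&sp[pick(rng)]);
            double f = evaluate(net);
            best = std::max(best, f);
            for (Neuron* n : net) { n->fitness += f; ++n->trials; }
        }

        // Replace the weaker half of each subpopulation with mutated copies
        // of the stronger half (a simplified selection/recombination step).
        for (auto& sp : subpops) {
            std::sort(sp.begin(), sp.end(), [](const Neuron& a, const Neuron& b) {
                double fa = a.trials ? a.fitness / a.trials : -1e9;
                double fb = b.trials ? b.fitness / b.trials : -1e9;
                return fa > fb;
            });
            for (int i = SUBPOP_SIZE / 2; i < SUBPOP_SIZE; ++i) {
                sp[i] = sp[i - SUBPOP_SIZE / 2];
                for (auto& w : sp[i].w) w += mut(rng);
            }
        }
        if (gen % 10 == 0)
            std::cout << "generation " << gen << "  best score " << best << "\n";
    }
}

In an actual application, evaluate() would be replaced by a rollout of the assembled controller in the task simulator, with robustness encouraged by adding noise or varying the plant parameters during evaluation.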
View: PDF, PS
Citation:
Faustino Gomez. PhD Thesis, Department of Computer Sciences, The University of Texas at Austin, 2003.
Faustino Gomez, Postdoctoral Alumni, tino [at] idsia ch
Risto Miikkulainen, Faculty, risto [at] cs utexas edu
Finless Rocket Control. Faustino Gomez, 2003.
ESP C++. The ESP package contains the source code for the Enforced SubPopulations system, written in C++. ESP is an extension t... 2000.