Temporal Difference and Policy Search Methods for Reinforcement Learning: An Empirical Comparison (2007)
Reinforcement learning (RL) methods have become popular in recent years because of their ability to solve complex tasks with minimal feedback. Both genetic algorithms (GAs) and temporal difference (TD) methods have proven effective at solving difficult RL problems, but few rigorous comparisons have been conducted. Thus, no general guidelines describing the methods' relative strengths and weaknesses are available. This paper summarizes a detailed empirical comparison between a GA and a TD method in Keepaway, a standard RL benchmark domain based on robot soccer. The results from this study help isolate the factors critical to the performance of each learning method and yield insights into their general strengths and weaknesses.
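
As a point of reference for the TD side of the comparison, the sketch below shows a generic one-step TD (Sarsa-style) update on a tabular value function. It is illustrative only: the learning rate, discount, exploration scheme, and tabular representation are assumptions chosen for exposition, not the learner, features, or parameters used in the paper's Keepaway experiments.

```python
import random
from collections import defaultdict

# Illustrative sketch: a generic one-step TD (Sarsa-style) update on a
# tabular Q-function. Parameters and representation are assumptions for
# exposition, not the setup evaluated in the paper.

ALPHA = 0.1    # learning rate
GAMMA = 0.95   # discount factor
EPSILON = 0.1  # exploration rate

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def choose_action(state, actions):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def td_update(state, action, reward, next_state, next_action):
    """One-step TD update: move Q(s, a) toward r + gamma * Q(s', a')."""
    target = reward + GAMMA * Q[(next_state, next_action)]
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```

A policy search method such as a GA, by contrast, would evaluate whole policies by their accumulated reward and search directly in policy space rather than estimating values with per-step updates like the one above.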
Citation:
In Proceedings of the Twenty-Second Conference on Artificial Intelligence (AAAI-07), pp. 1675–1678, July 2007. Nectar Track.
Peter Stone (pstone [at] cs utexas edu)
Matthew Taylor (taylorm [at] eecs wsu edu)
Shimon Whiteson (s a whiteson [at] uva nl)