neural networks research group
Temporal Difference and Policy Search Methods for Reinforcement Learning: An Empirical Comparison (2007)
Matthew E. Taylor, Shimon Whiteson, and Peter Stone
Reinforcement learning (RL) methods have become popular in recent years because of their ability to solve complex tasks with minimal feedback. Both genetic algorithms (GAs) and temporal difference (TD) methods have proven effective at solving difficult RL problems, but few rigorous comparisons have been conducted. Thus, no general guidelines describing the methods' relative strengths and weaknesses are available. This paper summarizes a detailed empirical comparison between a GA and a TD method in Keepaway, a standard RL benchmark domain based on robot soccer. The results from this study help isolate the factors critical to the performance of each learning method and yield insights into their general strengths and weaknesses.
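To make the contrast concrete, here is a minimal sketch of the two families of methods the paper compares. This is a generic illustration, not code from the study: the paper's actual comparison uses NEAT (a GA evolving neural networks) and Sarsa (a TD method) in Keepaway, whereas the functions below show only the core update each family is built on — a TD(0) value backup toward a bootstrapped target, and one generation of a simple policy-search GA that ranks candidate policies by episodic return and refills the population with mutated copies of the best. All names and parameters are illustrative.

```python
import random


def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) backup: move V(s) toward the bootstrapped target r + gamma*V(s').

    V is a dict mapping states to value estimates (missing states default to 0).
    Returns the updated estimate for s.
    """
    v_s = V.get(s, 0.0)
    target = r + gamma * V.get(s_next, 0.0)  # bootstrap from the next state's estimate
    V[s] = v_s + alpha * (target - v_s)
    return V[s]


def ga_generation(population, fitness, mutate_scale=0.1, rng=None):
    """One generation of a simple policy-search GA.

    population is a list of policy parameter vectors (lists of floats).
    fitness scores a whole policy (e.g. average episodic return); unlike TD,
    no per-step value estimates are maintained. The top half survives and
    each survivor produces one Gaussian-mutated child.
    """
    rng = rng or random.Random(0)
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = [
        [w + rng.gauss(0.0, mutate_scale) for w in parent]
        for parent in survivors
    ]
    return survivors + children
```

The structural difference the paper probes is visible here: the TD update exploits intermediate reward at every step, while the GA only ever sees a policy's total score, which is one of the factors (along with state observability and stochasticity) that determines which method wins in a given Keepaway configuration.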
View: PDF, PS, HTML
Citation:
In Proceedings of the Twenty-Second Conference on Artificial Intelligence, 1675-1678, July 2007. Nectar Track.
Bibtex:
@InProceedings{AAAI07-taylor,
  title     = {Temporal Difference and Policy Search Methods for Reinforcement Learning: An Empirical Comparison},
  author    = {Matthew E. Taylor and Shimon Whiteson and Peter Stone},
  booktitle = {Proceedings of the Twenty-Second Conference on Artificial Intelligence},
  month     = {July},
  pages     = {1675--1678},
  note      = {Nectar Track},
  url       = {http://nn.cs.utexas.edu/?AAAI07-taylor},
  year      = {2007}
}
People
Peter Stone
pstone [at] cs utexas edu
Matthew Taylor
taylorm [at] eecs wsu edu
Shimon Whiteson
Former Collaborator
s a whiteson [at] uva nl
Areas of Interest
Reinforcement Learning