Neural Networks Research Group
Data-Efficient Policy Evaluation Through Behavior Policy Search (2017)
Josiah Hanna, Philip Thomas, Peter Stone, and Scott Niekum
We consider the task of evaluating a policy for a Markov decision process (MDP). The standard unbiased technique for evaluating a policy is to deploy the policy and observe its performance. We show that data collected from deploying a different policy, commonly called the behavior policy, can be used to produce unbiased estimates with lower mean squared error than this standard technique. We derive an analytic expression for the optimal behavior policy: the behavior policy that minimizes the mean squared error of the resulting estimates. Because this expression depends on terms that are unknown in practice, we propose a novel policy evaluation sub-problem, behavior policy search: searching for a behavior policy that reduces mean squared error. We present a behavior policy search algorithm and empirically demonstrate its effectiveness in lowering the mean squared error of policy performance estimates.
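The estimator underlying the abstract is ordinary importance sampling: trajectories are drawn from a behavior policy, and each return is reweighted by the ratio of the evaluation policy's probability to the behavior policy's. The following is a minimal sketch of that idea on an illustrative one-step MDP; the policies, rewards, and function names are assumptions for the example, not the paper's experimental setup.

```python
import random

random.seed(0)

# A toy one-step MDP with two actions and deterministic rewards.
REWARDS = {0: 1.0, 1: 0.0}

def sample_action(policy):
    """Draw an action from a discrete policy {action: probability}."""
    r = random.random()
    total = 0.0
    for a, p in policy.items():
        total += p
        if r <= total:
            return a
    return a  # guard against floating-point round-off

def is_estimate(eval_policy, behavior_policy, n=10000):
    """Unbiased importance-sampling estimate of eval_policy's value,
    using samples drawn from behavior_policy."""
    total = 0.0
    for _ in range(n):
        a = sample_action(behavior_policy)
        weight = eval_policy[a] / behavior_policy[a]  # importance weight
        total += weight * REWARDS[a]
    return total / n

pi_e = {0: 0.5, 1: 0.5}   # policy being evaluated; its true value is 0.5
pi_b = {0: 0.9, 1: 0.1}   # a different behavior policy

print(is_estimate(pi_e, pi_e))  # standard on-policy Monte Carlo estimate
print(is_estimate(pi_e, pi_b))  # off-policy estimate; still unbiased
```

In this toy case the off-policy estimator that oversamples the rewarding action happens to have lower variance than the on-policy one, which is the phenomenon the paper exploits: behavior policy search looks for the behavior policy that minimizes the resulting mean squared error.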
View: PDF, HTML
Citation: In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, August 2017.
Bibtex:
@inproceedings{ICML17-Hanna,
  title={Data-Efficient Policy Evaluation Through Behavior Policy Search},
  author={Josiah Hanna and Philip Thomas and Peter Stone and Scott Niekum},
  booktitle={Proceedings of the 34th International Conference on Machine Learning (ICML)},
  address={Sydney, Australia},
  month={August},
  year={2017},
  url={http://nn.cs.utexas.edu/?hanna:icml17}
}
Presentation: Slides (PDF)
People
Josiah Hanna
jphanna [at] cs utexas edu
Peter Stone
pstone [at] cs utexas edu
Areas of Interest
Reinforcement Learning