Online Kernel Selection for Bayesian Reinforcement Learning (2008)
Kernel-based Bayesian methods for Reinforcement Learning (RL) such as Gaussian Process Temporal Difference (GPTD) are particularly promising because they rigorously treat uncertainty in the value function and make it easy to specify prior knowledge. However, the choice of prior distribution significantly affects the empirical performance of the learning agent, and little work has been done on extending existing prior model selection methods to the online setting. This paper develops Replacing-Kernel RL, an online model selection method for GPTD based on sequential Monte Carlo methods. Replacing-Kernel RL is compared to standard GPTD and tile-coding on several RL domains, and is shown to yield significantly better asymptotic performance for many different kernel families. Furthermore, the resulting kernels capture an intuitively useful notion of prior state covariance that may nevertheless be difficult to capture manually.
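The abstract describes selecting kernel hyperparameters online with sequential Monte Carlo while a GPTD agent learns. The sketch below is only a minimal illustration of that general idea, not the paper's algorithm: the RBF log-lengthscale parameterization, the episode-return fitness, the softmax-style weighting, the jitter scale, and the stand-in episode_return function are all assumptions made for this example.

    # Illustrative sketch (assumptions noted above): sequential Monte Carlo
    # resampling over a population of kernel hyperparameter "particles".
    import numpy as np

    rng = np.random.default_rng(0)

    def episode_return(log_lengthscale):
        # Stand-in for running one episode of a GPTD agent whose RBF kernel
        # uses lengthscale exp(log_lengthscale); a real implementation would
        # return the observed cumulative reward. A synthetic score keeps the
        # sketch runnable.
        return -(log_lengthscale - 0.5) ** 2 + rng.normal(scale=0.1)

    n_particles, n_generations, jitter = 20, 50, 0.05
    particles = rng.normal(size=n_particles)      # log-lengthscale particles

    for _ in range(n_generations):
        returns = np.array([episode_return(p) for p in particles])
        weights = np.exp(returns - returns.max())  # softmax-style weights
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
        particles = particles[idx] + rng.normal(scale=jitter, size=n_particles)

    best = particles[np.argmax([episode_return(p) for p in particles])]
    print("selected RBF lengthscale:", np.exp(best))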
Citation:
In Proceedings of the Twenty-Fifth International Conference on Machine Learning, July 2008.
Bibtex:
@inproceedings{reisinger-icml08,
  author    = {Joseph Reisinger and Peter Stone and Risto Miikkulainen},
  title     = {Online Kernel Selection for Bayesian Reinforcement Learning},
  booktitle = {Proceedings of the Twenty-Fifth International Conference on Machine Learning},
  month     = {July},
  year      = {2008}
}
Risto Miikkulainen, Faculty, risto [at] cs utexas edu
Joseph Reisinger, Former Ph.D. Student, joeraii [at] cs utexas edu
Peter Stone, pstone [at] cs utexas edu