TEXPLORE: Real-Time Sample-Efficient Reinforcement Learning for Robots (2012)
The use of robots in society could be expanded by applying reinforcement learning (RL) to allow robots to learn and adapt to new situations online. RL is a paradigm for learning sequential decision-making tasks, usually formulated as a Markov Decision Process (MDP). For an RL algorithm to be practical for robotic control tasks, it must learn from very few samples while continually taking actions in real time. In addition, the algorithm must learn efficiently in the face of noise, sensor/actuator delays, and continuous state features. In this article, we present TEXPLORE, the first algorithm to address all of these challenges together. TEXPLORE is a model-based RL method that learns a random forest model of the domain, which generalizes dynamics to unseen states. The agent explores states that are promising for the final policy, while ignoring states that do not appear promising. With sample-based planning and a novel parallel architecture, TEXPLORE can select actions continually in real time whenever necessary. We empirically evaluate the importance of each component of TEXPLORE in isolation and then demonstrate the complete algorithm learning to control the velocity of an autonomous vehicle in real time.
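The abstract outlines the core loop: learn a random forest model of the dynamics from experience, then plan over that learned model with sampled rollouts. Below is a minimal, illustrative sketch of that idea, not the authors' implementation. The ToyVelocityEnv environment, the hyperparameters, and the simple fixed-depth rollout planner are all assumptions made for illustration; the paper itself uses UCT-based sample planning and a parallel real-time architecture, both omitted here. The sketch assumes Python with numpy and scikit-learn available.

# Minimal sketch of model-based RL with a random forest dynamics model
# and sample-based rollout planning. Illustrative only; the environment,
# planner, and hyperparameters are assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class ToyVelocityEnv:
    """Hypothetical 1-D velocity control task: reach a target velocity."""
    def __init__(self, target=5.0, noise=0.1):
        self.target, self.noise, self.v = target, noise, 0.0
    def reset(self):
        self.v = 0.0
        return np.array([self.v])
    def step(self, action):  # actions: 0 = brake, 1 = coast, 2 = accelerate
        self.v += (action - 1) * 0.5 + np.random.randn() * self.noise
        reward = -abs(self.v - self.target)
        return np.array([self.v]), reward

def plan(model, state, n_actions=3, depth=5, gamma=0.95):
    """Evaluate each first action by a sampled rollout through the model."""
    best_a, best_ret = 0, -np.inf
    for a0 in range(n_actions):
        s, ret, discount, a = state.copy(), 0.0, 1.0, a0
        for _ in range(depth):
            x = np.append(s, a).reshape(1, -1)
            pred = model.predict(x)[0]        # [state delta, reward]
            s, r = s + pred[:-1], pred[-1]
            ret += discount * r
            discount *= gamma
            a = np.random.randint(n_actions)  # random continuation policy
        if ret > best_ret:
            best_a, best_ret = a0, ret
    return best_a

env = ToyVelocityEnv()
X, y = [], []       # transitions: (state, action) -> (state delta, reward)
model, state = None, env.reset()
for t in range(300):
    if model is None or np.random.rand() < 0.1:   # occasional exploration
        action = np.random.randint(3)
    else:
        action = plan(model, state)
    next_state, reward = env.step(action)
    X.append(np.append(state, action))
    y.append(np.append(next_state - state, reward))
    state = next_state
    if t % 25 == 24:  # periodically refit the forest on all experience
        model = RandomForestRegressor(n_estimators=10).fit(np.array(X), np.array(y))
print("final velocity: %.2f (target %.1f)" % (state[0], env.target))

Predicting state deltas rather than absolute next states is one way such a model can generalize dynamics to unseen states: the effect of "accelerate" learned at low velocities transfers directly to velocities the agent has never visited.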
Citation:
Machine Learning, 2012.
Todd Hester todd [at] cs utexas edu
Peter Stone pstone [at] cs utexas edu