Neural Networks Research Group
The Utility of Temporal Abstraction in Reinforcement Learning (2008)
Nicholas K. Jong, Todd Hester, and Peter Stone
The hierarchical structure of real-world problems has motivated extensive research into temporal abstractions for reinforcement learning, but precisely how these abstractions allow agents to improve their learning performance is not well understood. This paper investigates the connection between temporal abstraction and an agent's exploration policy, which determines how the agent's performance improves over time. Experimental results with standard methods for incorporating temporal abstractions show that these methods benefit learning only in limited contexts. The primary contribution of this paper is a clearer understanding of how hierarchical decompositions interact with reinforcement learning algorithms, with important consequences for the manual design or automatic discovery of action hierarchies.
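The temporal abstractions the abstract refers to are commonly formalized as options (Sutton, Precup & Singh 1999): closed-loop sub-policies with their own termination conditions, which an agent can invoke as if they were single actions. As a hedged illustration only (this sketch is not taken from the paper; the corridor environment, state count, and option definition are invented for the example), here is a minimal options construction in a one-dimensional corridor:

```python
import random

# Minimal sketch of the options framework for temporal abstraction
# (illustrative only; not the paper's experimental setup).
# Environment: a 1-D corridor with states 0..9 and a goal at state 9.
N = 10
GOAL = N - 1

def step(s, a):
    """Primitive transition: a is -1 (left) or +1 (right)."""
    s2 = max(0, min(GOAL, s + a))
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r

class Option:
    """An option: an internal policy plus a termination probability beta(s)."""
    def __init__(self, policy, beta):
        self.policy = policy   # s -> primitive action
        self.beta = beta       # s -> probability of terminating in s

    def run(self, s, gamma=0.95):
        """Execute until termination.

        Returns (final state, discounted cumulative reward, duration k),
        exactly the quantities an SMDP learner needs to treat the whole
        execution as one temporally extended action.
        """
        total, disc, k = 0.0, 1.0, 0
        while True:
            s, r = step(s, self.policy(s))
            total += disc * r
            disc *= gamma
            k += 1
            if random.random() < self.beta(s):
                return s, total, k

# A "run right until the goal" option: deterministic termination at GOAL.
run_right = Option(policy=lambda s: +1,
                   beta=lambda s: 1.0 if s == GOAL else 0.0)

s_final, reward, duration = run_right.run(0)
```

An SMDP Q-learning update would then discount by gamma**k, i.e. Q(s, o) += alpha * (reward + gamma**duration * max_o' Q(s_final, o') - Q(s, o)); how such extended actions shape the agent's exploration is the question the paper investigates.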
View: PDF, PS, HTML
Citation:
In The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, May 2008.
Bibtex:
@InProceedings{AAMAS08-jong,
  title     = {The Utility of Temporal Abstraction in Reinforcement Learning},
  author    = {Nicholas K. Jong and Todd Hester and Peter Stone},
  booktitle = {The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems},
  month     = {May},
  year      = {2008},
  url       = {http://nn.cs.utexas.edu/?AAMAS08-jong}
}
People
Todd Hester
todd [at] cs utexas edu
Nicholas Jong
nickjong [at] me com
Peter Stone
pstone [at] cs utexas edu
Areas of Interest
Machine Learning
Planning