neural networks research group
Towards Learning to Ignore Irrelevant State Variables (2004)
Nicholas K. Jong and Peter Stone
Hierarchical methods have attracted much recent attention as a means for scaling reinforcement learning algorithms to increasingly complex, real-world tasks. These methods provide two important kinds of abstraction that facilitate learning. First, hierarchies organize actions into temporally abstract high-level tasks. Second, they facilitate task-dependent state abstractions that allow each high-level task to restrict attention to only the relevant state variables. In most approaches to date, the user must supply suitable task decompositions and state abstractions to the learner. How to discover these hierarchies automatically remains a challenging open problem. As a first step towards solving this problem, we introduce a general method for determining the validity of potential state abstractions that might form the basis of reusable tasks. We build a probabilistic model of the underlying Markov decision problem and then statistically test the applicability of the state abstraction. We demonstrate the ability of our procedure to discriminate between safe and unsafe state abstractions in the familiar Taxi domain.
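The abstract's core idea — statistically testing whether a candidate state abstraction is safe — can be illustrated with a minimal sketch. The idea, under our own simplifying assumptions (the paper's actual model and test may differ): if a state variable is truly irrelevant to a task, then the empirical distribution over transition outcomes should not depend on that variable's value, which we can check with a standard chi-squared test on a contingency table of outcome counts. The data and the 0.05 critical value below are illustrative.

```python
# Hedged sketch: decide whether ignoring a state variable looks "safe"
# by testing whether transition-outcome counts are statistically
# indistinguishable across that variable's values. This is an
# illustration of the general idea, not the paper's exact procedure.

def chi2_stat(table):
    """Pearson chi-squared statistic for a contingency table.

    `table` is a list of rows; row i holds outcome counts observed
    when the candidate-irrelevant variable took its i-th value.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            if expected > 0:
                stat += (observed - expected) ** 2 / expected
    return stat

# Toy transition data for one abstract state-action pair:
# each row is a value of the candidate-irrelevant variable,
# each column a next-state outcome.
irrelevant = [[48, 52], [50, 50]]   # outcomes don't depend on the variable
relevant = [[90, 10], [15, 85]]     # outcomes clearly depend on it

# Chi-squared critical value, 1 degree of freedom, alpha = 0.05.
CRITICAL = 3.84
print(chi2_stat(irrelevant) < CRITICAL)   # True: abstraction looks safe
print(chi2_stat(relevant) < CRITICAL)     # False: abstraction rejected
```

In a full system, such a test would be run per high-level task against counts drawn from the learned model of the Markov decision problem, flagging abstractions whose statistic exceeds the critical value as unsafe.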
View: PDF, PS, HTML
Citation: In The AAAI-2004 Workshop on Learning and Planning in Markov Processes -- Advances and Challenges, 2004.
Bibtex:
@inproceedings{jong:aaai04ws,
  title={Towards Learning to Ignore Irrelevant State Variables},
  author={Nicholas K. Jong and Peter Stone},
  booktitle={The AAAI-2004 Workshop on Learning and Planning in Markov Processes -- Advances and Challenges},
  url={http://nn.cs.utexas.edu/?jong:aaai04ws},
  year={2004}
}
People
Nicholas Jong
nickjong [at] me com
Peter Stone
pstone [at] cs utexas edu
Areas of Interest
Markov Decision Processes
Reinforcement Learning