Behavior Transfer for Value-Function-Based Reinforcement Learning (2005)
Temporal difference (TD) learning methods have become popular reinforcement learning techniques in recent years. TD methods have had some experimental successes and have been shown to exhibit some desirable properties in theory, but are often slow in practice. A key feature of TD methods is that they represent policies in terms of value functions. In this paper we introduce "behavior transfer," a novel approach to speeding up TD learning by transferring the learned value function from one task to a second related task. We present experimental results showing that autonomous learners are able to learn one multiagent task and then use behavior transfer to markedly reduce the total training time for a more complex task.
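The core idea of the abstract, initializing a target task's value function from one learned on a simpler source task, can be sketched with tabular Q-learning. The chain tasks, the identity mapping between shared states, and all function names below are illustrative assumptions, not the paper's actual multiagent domain or inter-task mapping.

```python
import random

def q_learning(n_states, n_actions, step_fn, episodes, q=None,
               alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning; `q` may be pre-initialized (transferred)."""
    if q is None:
        q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < epsilon:
                a = random.randrange(n_actions)       # explore
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])  # exploit
            s2, r, done = step_fn(s, a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])     # TD update
            s = s2
    return q

def make_chain(n):
    """Toy chain MDP: actions 0/1 move left/right, reward at the right end."""
    def step(s, a):
        s2 = min(max(s + (1 if a == 1 else -1), 0), n - 1)
        return s2, (1.0 if s2 == n - 1 else 0.0), s2 == n - 1
    return step

random.seed(0)
SRC_N, TGT_N, ACTIONS = 5, 10, 2

# Learn the source (simpler) task from scratch.
q_src = q_learning(SRC_N, ACTIONS, make_chain(SRC_N), episodes=300)

# Behavior transfer: initialize the target task's value table from the
# source task's, here via a trivial identity mapping on shared states.
q_tgt = [[0.0] * ACTIONS for _ in range(TGT_N)]
for s in range(SRC_N):
    q_tgt[s] = list(q_src[s])

# Continue TD learning on the larger target task from the transferred values.
q_tgt = q_learning(TGT_N, ACTIONS, make_chain(TGT_N), episodes=300, q=q_tgt)
```

In the paper's setting the mapping between source and target state-action spaces is generally nontrivial; the identity copy above only stands in for that step.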
Citation:
In Frank Dignum, Virginia Dignum, Sven Koenig, Sarit Kraus, Munindar P. Singh, and Michael Wooldridge, editors, The Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 53–59, New York, NY, July 2005. ACM Press.
Peter Stone pstone [at] cs utexas edu
Matthew Taylor taylorm [at] eecs wsu edu