Interactively Shaping Agents via Human Reinforcement: The TAMER Framework (2009)
As computational learning agents move into domains that incur real costs (e.g., autonomous driving or financial investment), it will be necessary to learn good policies without numerous high-cost learning trials. One promising approach to reducing the sample complexity of learning a task is knowledge transfer from humans to agents. Ideally, methods of transfer should be accessible to anyone with task knowledge, regardless of that person's expertise in programming and AI. This paper focuses on allowing a human trainer to interactively shape an agent's policy via reinforcement signals. Specifically, the paper introduces "Training an Agent Manually via Evaluative Reinforcement," or TAMER, a framework that enables such shaping. Differing from previous approaches to interactive shaping, a TAMER agent models the human's reinforcement and exploits its model by choosing actions expected to be most highly reinforced. Results from two domains demonstrate that lay users can train TAMER agents without defining an environmental reward function (as in an MDP) and indicate that human training within the TAMER framework can reduce sample complexity relative to autonomous learning algorithms.
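The abstract's core idea is that the agent learns a model of the human's reinforcement signal and acts greedily with respect to it, rather than optimizing an environmental reward. The sketch below illustrates that loop under stated assumptions: the linear model, feature function, and the `env` / `get_human_reward` interfaces are hypothetical stand-ins, not the paper's exact implementation.

```python
import numpy as np


class LinearHumanRewardModel:
    """Linear estimate of human reinforcement, H_hat(s, a) = w . phi(s, a)."""

    def __init__(self, n_features, learning_rate=0.1):
        self.w = np.zeros(n_features)
        self.lr = learning_rate

    def predict(self, features):
        return float(np.dot(self.w, features))

    def update(self, features, human_reward):
        # Incremental least-squares step toward the observed human signal.
        error = human_reward - self.predict(features)
        self.w += self.lr * error * features


def featurize(state, action, n_actions):
    """Hypothetical features: the state vector tiled per discrete action."""
    state = np.asarray(state, dtype=float)
    phi = np.zeros(len(state) * n_actions)
    phi[action * len(state):(action + 1) * len(state)] = state
    return phi


def select_action(model, state, n_actions):
    """Greedy choice: the action whose predicted human reinforcement is highest."""
    scores = [model.predict(featurize(state, a, n_actions)) for a in range(n_actions)]
    return int(np.argmax(scores))


def run_episode(env, model, n_actions, get_human_reward):
    """One shaping episode: act, receive trainer feedback, update the model."""
    state, done = env.reset(), False
    while not done:
        action = select_action(model, state, n_actions)
        next_state, done = env.step(action)   # note: no environmental reward is used
        h = get_human_reward()                # scalar reinforcement from the trainer
        if h != 0.0:                          # update only when feedback was given
            model.update(featurize(state, action, n_actions), h)
        state = next_state
```

Note the contrast with standard reinforcement learning: there is no discounted return to maximize, only the trainer's immediate evaluative signal, which is why a supervised, myopic update suffices in this sketch.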
Citation:
In The Fifth International Conference on Knowledge Capture (K-CAP), September 2009.
W. Bradley Knox bradknox [at] mit edu
Peter Stone pstone [at] cs utexas edu