neural networks research group
Sample-efficient Adversarial Imitation Learning from Observation (2019)
Faraz Torabi, Garrett Warnell, and Peter Stone
Imitation from observation is the framework of learning tasks by observing demonstrated state-only trajectories. Recently, adversarial approaches have achieved significant performance improvements over other methods for imitating complex behaviors. However, these adversarial imitation algorithms often require many demonstration examples and learning iterations to produce a policy that successfully imitates a demonstrator's behavior. This high sample complexity often prohibits these algorithms from being deployed on physical robots. In this paper, we propose an algorithm that addresses the sample-inefficiency problem by utilizing ideas from trajectory-centric reinforcement learning algorithms. We test our algorithm by conducting experiments on an imitation task with a physical robot arm and its simulated version in Gazebo, and show improvements in learning rate and sample efficiency.
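The adversarial imitation-from-observation setup the abstract describes can be sketched in miniature: a discriminator sees only state transitions (s, s'), never actions, and the imitator is rewarded for producing transitions the discriminator cannot distinguish from the demonstrator's. The toy 1-D environment, feature choice, and function names below are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of the adversarial imitation-from-observation idea:
# the discriminator classifies state transitions (s, s') as expert vs.
# imitator, and -log D becomes a cost (here, log D a reward) for the
# imitator. The 1-D dynamics and logistic-regression discriminator are
# illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def expert_transitions(n):
    # Demonstrator moves the state toward a goal at +1 per step
    # (state-only data: no actions are recorded).
    s = rng.uniform(-5, 5, size=n)
    return np.stack([s, s + 1.0], axis=1)

def policy_transitions(n):
    # An untrained imitator drifts randomly.
    s = rng.uniform(-5, 5, size=n)
    return np.stack([s, s + rng.normal(0.0, 1.0, size=n)], axis=1)

def features(trans):
    # Simple transition features: the state delta plus a bias term.
    delta = trans[:, 1] - trans[:, 0]
    return np.stack([delta, np.ones(len(trans))], axis=1)

def train_discriminator(pos, neg, lr=0.5, steps=200):
    # Logistic regression: label 1 = expert transition, 0 = imitator.
    X = np.vstack([features(pos), features(neg)])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)  # gradient ascent on log-likelihood
    return w

def imitation_reward(w, trans):
    # Reward signal for the imitator: log D(s, s'), high when a
    # transition looks like the demonstrator's.
    p = 1.0 / (1.0 + np.exp(-features(trans) @ w))
    return np.log(p + 1e-8)

w = train_discriminator(expert_transitions(500), policy_transitions(500))
r_expert = imitation_reward(w, expert_transitions(100)).mean()
r_policy = imitation_reward(w, policy_transitions(100)).mean()
```

In the full adversarial loop, the imitator's policy would then be updated (e.g. by reinforcement learning) to maximize this reward while the discriminator is retrained, alternating until the two transition distributions match.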
View: PDF, HTML
Bibtex:
@inproceedings{ICML19-torabi,
  title={Sample-efficient Adversarial Imitation Learning from Observation},
  author={Faraz Torabi and Garrett Warnell and Peter Stone},
  booktitle={Proceedings of the 36th International Conference on Machine Learning (ICML)},
  month={June},
  year={2019},
  address={Long Beach, California, USA},
  url={http://nn.cs.utexas.edu/?ICML19-torabi}
}
People
Peter Stone
pstone [at] cs utexas edu
Faraz Torabi
faraztrb [at] cs utexas edu
Garrett Warnell
warnellg [at] cs utexas edu
Areas of Interest
Imitation Learning
Machine Learning