Improving Grounded Natural Language Understanding through Human-Robot Dialog (2019)
Natural language understanding for robotics can require substantial domain- and platform-specific engineering. For example, for mobile robots to pick and place objects in an environment to satisfy human commands, we can specify the language humans use to issue such commands and connect concept words like "red can" to physical object properties. One way to alleviate this engineering for a new domain is to enable robots in human environments to adapt dynamically, continually learning new language constructions and perceptual concepts. In this work, we present an end-to-end pipeline for translating natural language commands to discrete robot actions, and we use clarification dialogs to jointly improve language parsing and concept grounding. We train and evaluate this agent in a virtual setting on Amazon Mechanical Turk, and we transfer the learned agent to a physical robot platform to demonstrate it in the real world.
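To make the pipeline described above concrete, here is a hypothetical sketch (not the paper's actual implementation): a toy parser maps a command to an action frame, concept words are grounded against a small lexicon of object properties, and an unknown concept triggers a clarification question whose answer could extend the lexicon for future commands. All names (`KNOWN_CONCEPTS`, `parse`, `ground`, `respond`) are illustrative assumptions.

```python
# Hypothetical sketch of a command-understanding loop with clarification
# dialogs; the lexicon, parser, and grounding function are toy stand-ins.

KNOWN_CONCEPTS = {"red can": "obj_3", "blue mug": "obj_7"}  # toy concept lexicon


def parse(command):
    """Toy semantic parser: 'move the red can to the kitchen'
    -> ('move', 'red can', 'kitchen')."""
    text = command.lower()
    action = text.split()[0]
    if " to the " in text:
        before, _, dest = text.partition(" to the ")
        obj = before.split("the ", 1)[1] if "the " in before else before
        return action, obj.strip(), dest.strip()
    return action, None, None


def ground(concept):
    """Map a concept phrase to an object ID, or None if unknown."""
    return KNOWN_CONCEPTS.get(concept)


def respond(command):
    """Execute a grounded command, or ask a clarification question."""
    action, concept, dest = parse(command)
    obj = ground(concept) if concept else None
    if obj is None:
        # Clarification dialog: ask rather than guess, so the user's
        # answer can teach the agent a new perceptual concept.
        return f"What do you mean by '{concept}'?"
    return f"Executing: {action} {obj} -> {dest}"
```

In the paper's setting, the parser and grounder are learned models updated from dialog, rather than the hand-written rules and dictionary used here.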
View:
PDF
Citation:
In IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019.

Justin Hart hart [at] cs utexas edu
Yuqian Jiang
Raymond J. Mooney mooney [at] cs utexas edu
Aishwarya Padmakumar aish [at] cs utexas edu
Jivko Sinapov jsinapov [at] cs utexas edu
Peter Stone pstone [at] cs utexas edu
Jesse Thomason thomason.jesse [at] gmail
Nick Walker nswalker [at] cs uw edu
Harel Yedidsion harel [at] cs utexas edu