Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog (2018)
Natural language understanding in robots needs to be robust to a wide range of both human speakers and human environments. Rather than forcing humans to use language that robots can understand, robots in human environments should dynamically adapt, continuously learning new language constructions and perceptual concepts as they are used in context. In this work, we present methods for parsing natural language to underlying meanings and for using robotic sensors to create multi-modal models of perceptual concepts. We combine these steps towards language understanding into a holistic agent that jointly improves parsing and perception on a robotic platform through human-robot dialog. We train and evaluate this agent on Amazon Mechanical Turk, then demonstrate it on a robotic platform initialized from the conversational data gathered on Mechanical Turk. Our experiments show that improving both the parsing and perception components from conversations improves communication quality and human ratings of the agent.
Citation:
In RSS Workshop on Models and Representations for Natural Human-Robot Communication (MRHRC-18). Robotics: Science and Systems (RSS), June 2018.
Justin Hart hart [at] cs utexas edu
Yuqian Jiang
Raymond J. Mooney mooney [at] cs utexas edu
Aishwarya Padmakumar aish [at] cs utexas edu
Jivko Sinapov jsinapov [at] cs utexas edu
Peter Stone pstone [at] cs utexas edu
Jesse Thomason thomason [dot] jesse [at] gmail
Nick Walker nswalker [at] cs uw edu
Harel Yedidsion harel [at] cs utexas edu