Optimal Use Of Verbal Instructions For Multi-Robot Human Navigation Guidance (2019)
Harel Yedidsion, Jacqueline Deans, Connor Sheehan, Mahathi Chillara, Justin Hart, Peter Stone, and Raymond J. Mooney
Efficiently guiding humans in indoor environments is a challenging open problem. Due to recent advances in mobile robotics and natural language processing, it has become possible to consider doing so with the help of mobile, verbally communicating robots. Previously, stationary verbal robots were used for this purpose at Microsoft Research, and mobile non-verbal robots were used in UT Austin's multi-robot human guidance system. This paper extends that mobile multi-robot human guidance research by adding natural language instructions, which are dynamically generated from the robots' path planner, and by implementing and testing the system on real robots. Generating natural language instructions from the robots' plan opens up a variety of optimization opportunities, such as deciding where to place the robots, where to lead humans, and where to verbally instruct them. We present experimental results of the full multi-robot human guidance system and show that it is more effective than two baseline systems: one that provides humans with only verbal instructions, and another that uses only a single robot to lead users to their destinations.
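
(Illustrative sketch, not from the paper: the abstract states that instructions are generated dynamically from the robots' path planner but gives no implementation details. The Python below is a minimal hypothetical example of that general idea, compressing a planner's grid waypoints into turn-by-turn spoken directions. All names here, such as path_to_instructions, and the grid/heading assumptions are invented for illustration and should not be taken as the authors' method.)

    from typing import List, Tuple

    Waypoint = Tuple[int, int]

    def heading(a: Waypoint, b: Waypoint) -> Tuple[int, int]:
        """Unit step direction from waypoint a to waypoint b."""
        return (b[0] - a[0], b[1] - a[1])

    def turn_direction(prev: Tuple[int, int], cur: Tuple[int, int]) -> str:
        """Classify the turn between two unit headings via the 2D cross product."""
        cross = prev[0] * cur[1] - prev[1] * cur[0]
        if cross > 0:
            return "turn left"
        if cross < 0:
            return "turn right"
        return "continue straight"

    def path_to_instructions(path: List[Waypoint], meters_per_cell: float = 1.0) -> str:
        """Compress a planner's waypoint path into a short spoken instruction."""
        if len(path) < 2:
            return "You have arrived at your destination."
        segments = []
        cur_heading = heading(path[0], path[1])
        run = 1  # cells traveled along the current heading
        for i in range(1, len(path) - 1):
            nxt = heading(path[i], path[i + 1])
            if nxt == cur_heading:
                run += 1
            else:
                segments.append(
                    f"go straight for {run * meters_per_cell:g} meters, "
                    f"then {turn_direction(cur_heading, nxt)}"
                )
                cur_heading, run = nxt, 1
        segments.append(f"go straight for {run * meters_per_cell:g} meters")
        return "Please " + ", ".join(segments) + "; your destination will be there."

    # Example: an L-shaped corridor path with 2-meter grid cells prints
    # "Please go straight for 6 meters, then turn left, go straight for
    #  4 meters; your destination will be there."
    path = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
    print(path_to_instructions(path, meters_per_cell=2.0))

In a real system such a string would be passed to the robot's speech synthesizer, and the segmentation points would double as candidate locations for handing the user off to another robot.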
View:
PDF
Citation:
In Proceedings of the Eleventh International Conference on Social Robotics (ICSR), pp. 133–143, Springer, 2019.
Bibtex:
@inproceedings{yedidsion2019optimal,
  author    = {Harel Yedidsion and Jacqueline Deans and Connor Sheehan and Mahathi Chillara and Justin Hart and Peter Stone and Raymond J. Mooney},
  title     = {Optimal Use of Verbal Instructions for Multi-Robot Human Navigation Guidance},
  booktitle = {Proceedings of the Eleventh International Conference on Social Robotics (ICSR)},
  pages     = {133--143},
  publisher = {Springer},
  year      = {2019}
}
Presentation:
Slides (PDF) | Video
Justin Hart hart [at] cs utexas edu
Raymond J. Mooney mooney [at] cs utexas edu
Peter Stone pstone [at] cs utexas edu
Harel Yedidsion harel [at] cs utexas edu