Robot Behavioral Exploration and Multimodal Perception using POMDPs (2017)
Shiqi Zhang, Jivko Sinapov, Suhua Wei, and Peter Stone
Service robots are increasingly present in everyday environments such as homes, offices, airports, and hospitals. A common task for such robots is retrieving an object for a user. Consider the request, "Robot, please fetch me the red empty bottle." A key problem for the robot is deciding whether a candidate object matches the properties in the query. For properties such as heavy, soft, or empty, visual classification alone is insufficient: the robot must perform an action (e.g., lifting the object) to determine whether the property holds. Furthermore, the robot must decide which actions, possibly out of many, to perform on an object, i.e., it must generate a behavioral policy for the given request. Although multimodal perception and POMDP-based object exploration have each been studied previously, to the best of our knowledge no prior work integrates the two in robotics. In this work, given queries about object properties, we dynamically construct POMDPs using a data set collected with a real robot. Experiments on exploring new objects show that our POMDP-based object exploration strategy significantly reduces the overall cost of exploration actions without hurting accuracy, compared to a baseline strategy that uses a predefined sequence of actions.
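To make the idea concrete, here is a minimal sketch of POMDP-style object exploration for a single binary property (empty vs. full). Everything in it is a hypothetical stand-in: the behaviors ("look", "lift", "shake"), their costs, and their observation accuracies are made-up placeholders rather than the models learned from the robot data set in the paper, and the myopic information-gain policy substitutes for actually solving the constructed POMDP.

import math
import random

# Each exploratory behavior maps to (action cost,
#   P(sensor reports "empty" | object is empty),
#   P(sensor reports "empty" | object is full)).
# All behaviors and numbers are illustrative placeholders.
BEHAVIORS = {
    "look":  (0.5, 0.55, 0.45),   # vision barely informative for "empty"
    "lift":  (2.0, 0.90, 0.10),   # weight is highly informative but costly
    "shake": (1.5, 0.80, 0.25),   # sloshing sound is fairly informative
}

def belief_update(b, behavior, obs_empty):
    """Bayes-update P(empty) after observing one behavior's outcome."""
    _, p_e, p_f = BEHAVIORS[behavior]
    if not obs_empty:                 # flip to P(sensor reports "full" | .)
        p_e, p_f = 1.0 - p_e, 1.0 - p_f
    return p_e * b / (p_e * b + p_f * (1.0 - b))

def entropy(b):
    """Binary entropy of the belief, in bits."""
    if b <= 0.0 or b >= 1.0:
        return 0.0
    return -(b * math.log2(b) + (1.0 - b) * math.log2(1.0 - b))

def expected_entropy_after(b, behavior):
    """Expected posterior entropy if `behavior` is executed at belief b."""
    _, p_e, p_f = BEHAVIORS[behavior]
    p_obs = p_e * b + p_f * (1.0 - b)  # marginal P(sensor reports "empty")
    return (p_obs * entropy(belief_update(b, behavior, True)) +
            (1.0 - p_obs) * entropy(belief_update(b, behavior, False)))

def explore(object_is_empty, threshold=0.95, max_steps=25, seed=0):
    """Myopic policy: repeatedly execute the behavior with the highest
    expected entropy reduction per unit cost, then report once the belief
    is confident. A one-step lookahead stand-in for a full POMDP solver."""
    rng = random.Random(seed)
    b, total_cost = 0.5, 0.0           # uniform prior over {empty, full}
    for _ in range(max_steps):
        if max(b, 1.0 - b) >= threshold:
            break
        def gain_per_cost(a):
            return (entropy(b) - expected_entropy_after(b, a)) / BEHAVIORS[a][0]
        action = max(BEHAVIORS, key=gain_per_cost)
        _, p_e, p_f = BEHAVIORS[action]
        # Simulate a noisy observation from the true hidden state.
        obs = rng.random() < (p_e if object_is_empty else p_f)
        b = belief_update(b, action, obs)
        total_cost += BEHAVIORS[action][0]
    return ("empty" if b > 0.5 else "full"), round(b, 3), total_cost

print(explore(object_is_empty=True))   # (label, final belief, total action cost)

In the paper, the observation models and action costs come from multimodal exploration data collected on a real robot, and the behavioral policy is obtained by solving the dynamically constructed POMDP; the greedy heuristic above only conveys the cost-versus-information trade-off that makes a predefined action sequence suboptimal.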
View:
PDF, HTML
Citation:
In Proceedings of the 2017 AAAI Spring Symposium on Interactive Multi-Sensory Perception for Embodied Agents, Stanford, CA, March 2017.
Bibtex:
@inproceedings{zhang2017robot,
  author    = {Shiqi Zhang and Jivko Sinapov and Suhua Wei and Peter Stone},
  title     = {Robot Behavioral Exploration and Multimodal Perception using {POMDP}s},
  booktitle = {Proceedings of the 2017 AAAI Spring Symposium on Interactive Multi-Sensory Perception for Embodied Agents},
  address   = {Stanford, CA},
  month     = {March},
  year      = {2017}
}
Jivko Sinapov jsinapov [at] cs utexas edu
Peter Stone pstone [at] cs utexas edu
Shiqi Zhang szhang [at] cs utexas edu