Grounded Semantic Networks for Learning Shared Communication Protocols (2016)
Cooperative multiagent learning poses the challenge of coordinating independent agents. A powerful way to achieve coordination is to allow agents to communicate. We present the Grounded Semantic Network (GSN), an approach for learning a task-dependent communication protocol grounded in the observation space and reward function of the task. We show that the GSN effectively learns a communication protocol that helps agents cooperate. Analyzing the messages transmitted between agents reveals that agents' policies are strongly influenced by the communication they receive from teammates. Further analysis highlights the GSN's limitations, identifying the characteristics of domains it can and cannot solve.
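To make the idea concrete, the sketch below shows one plausible shape for such an architecture: each agent's network maps its own observation plus teammates' previous messages to both an action and a new outgoing message. This is only an illustrative sketch with random, untrained weights; the class name `GSNAgent`, the layer sizes, and the wiring are assumptions for illustration, not the paper's actual architecture or training procedure (which learns the protocol end-to-end from the task reward).

```python
import numpy as np

rng = np.random.default_rng(0)

class GSNAgent:
    """Illustrative agent: the policy network consumes its own observation
    concatenated with teammates' messages from the previous step, and emits
    both a discrete action and a continuous outgoing message.
    (Hypothetical sketch -- weights are random, not learned.)"""

    def __init__(self, obs_dim, msg_dim, n_actions, n_teammates, hidden=16):
        in_dim = obs_dim + n_teammates * msg_dim
        # In the actual approach these weights would be trained from the
        # task's reward signal, grounding the messages in the observations.
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W_act = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.W_msg = rng.normal(0.0, 0.1, (hidden, msg_dim))

    def step(self, obs, teammate_msgs):
        x = np.concatenate([obs] + teammate_msgs)
        h = np.tanh(x @ self.W1)
        action = int(np.argmax(h @ self.W_act))   # greedy discrete action
        message = np.tanh(h @ self.W_msg)          # message for teammates
        return action, message

# Two agents exchanging messages over a few steps of a toy rollout.
obs_dim, msg_dim, n_actions = 4, 2, 3
agents = [GSNAgent(obs_dim, msg_dim, n_actions, n_teammates=1) for _ in range(2)]
msgs = [np.zeros(msg_dim) for _ in agents]  # no messages before the first step
for t in range(3):
    obs = [rng.normal(size=obs_dim) for _ in agents]
    out = [a.step(o, [msgs[1 - i]]) for i, (a, o) in enumerate(zip(agents, obs))]
    actions, new_msgs = zip(*out)
    msgs = list(new_msgs)
```

The key design point the sketch captures is that the message channel is part of each agent's input and output, so when the whole system is trained on the task reward, gradients shape the messages into a protocol useful for cooperation.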
Citation:
In Deep Reinforcement Learning, NIPS Workshop, Barcelona, Spain, December 2016.
Matthew Hausknecht (former collaborator), mhauskn [at] cs.utexas.edu
Peter Stone, pstone [at] cs.utexas.edu