Improving Coordination with Communication in Multiagent Reinforcement Learning
Abstract
In this paper, we present a new algorithm for cooperative reinforcement learning in multiagent systems. We consider autonomous, independently learning agents and seek an optimal solution for the team as a whole while keeping the learning process as decentralized as possible. Coordination between agents is achieved through communication, namely the mutual notification algorithm. We formulate the learning problem as a decentralized process using the MDP formalism. We then give an optimality criterion and prove the convergence of the algorithm for deterministic environments. We introduce variable and hierarchical communication strategies that considerably reduce the number of communications. Finally, we study the convergence properties and the communication overhead on a small example.