Conference Papers, Year: 2004

Improving Coordination with Communication in Multiagent Reinforcement Learning

Daniel Szer

Abstract

In this paper we present a new algorithm for cooperative reinforcement learning in multiagent systems. We consider autonomous, independently learning agents and seek an optimal solution for the team as a whole while keeping learning as decentralized as possible. Coordination between agents is achieved through communication, via a mutual notification algorithm. We formalize the learning problem as a decentralized process using the MDP formalism, give an optimality criterion, and prove convergence of the algorithm in deterministic environments. We further introduce variable and hierarchical communication strategies that considerably reduce the number of communications. Finally, we study the convergence properties and the communication overhead on a small example.
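
The mutual-notification idea can be illustrated with a short sketch. The Python example below is an assumption-laden illustration, not the paper's algorithm: two independent Q-learners solve a tiny deterministic chain task and notify their teammate whenever their greedy value estimate for a state changes by more than a threshold. The environment, the threshold NOTIFY_EPS, and the way notifications trigger extra exploration are all illustrative choices introduced here, not taken from the paper.

# Minimal sketch (not the paper's exact algorithm): independent Q-learners that
# coordinate by "mutual notification". Whenever an agent's greedy value estimate
# for a state changes noticeably, it notifies its teammates, who then revisit
# that state instead of relying on stale information. The environment and the
# constant NOTIFY_EPS are illustrative assumptions.

import random
from collections import defaultdict

ACTIONS = [-1, +1]          # move left / right on a small chain world
GOAL, N_STATES = 4, 5       # deterministic chain task (assumption)
ALPHA, GAMMA = 0.5, 0.9
NOTIFY_EPS = 0.01           # minimum value change that triggers a notification

class Agent:
    def __init__(self, name, team):
        self.name, self.team = name, team
        self.q = defaultdict(float)          # Q[(state, action)]
        self.inbox = []                      # states flagged by teammates

    def value(self, s):
        return max(self.q[(s, a)] for a in ACTIONS)

    def act(self, s, eps=0.2):
        # Explore flagged states more aggressively (crude use of notifications).
        if s in self.inbox or random.random() < eps:
            self.inbox = [x for x in self.inbox if x != s]
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(s, a)])

    def learn(self, s, a, r, s2):
        old_v = self.value(s)
        target = r + GAMMA * self.value(s2)
        self.q[(s, a)] += ALPHA * (target - self.q[(s, a)])
        # Mutual notification: tell teammates if this state's value changed.
        if abs(self.value(s) - old_v) > NOTIFY_EPS:
            for mate in self.team:
                if mate is not self:
                    mate.inbox.append(s)

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))    # deterministic transition
    return s2, (1.0 if s2 == GOAL else 0.0)

team = []
team += [Agent("a1", team), Agent("a2", team)]
for episode in range(200):
    s = 0
    for _ in range(20):
        for ag in team:                      # both agents act on the shared state
            a = ag.act(s)
            s2, r = step(s, a)
            ag.learn(s, a, r, s2)
            s = s2
print({ag.name: round(ag.value(0), 3) for ag in team})

In this toy version the notification channel only biases exploration; the paper's variable and hierarchical communication strategies would further reduce how often such messages need to be sent.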

Dates and versions

inria-00100165, version 1 (26-09-2006)

Identifiers

  • HAL Id: inria-00100165, version 1

Cite

Daniel Szer, François Charpillet. Improving Coordination with Communication in Multiagent Reinforcement Learning. 16th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'04), 2004, Boca Raton, USA, 5 p. ⟨inria-00100165⟩