Report, Year: 2004

Coordination through Mutual Notification in Cooperative Multiagent Reinforcement Learning

Daniel Szer
  • Function: Author
  • PersonId : 830433

Abstract

We present a new algorithm for cooperative reinforcement learning in multiagent systems. Our main concern is correct coordination between the members of the team: we seek to obtain an optimal solution for the team as a whole while keeping the learning as decentralized as possible. We furthermore consider autonomous, independently learning agents that do not store any explicit information about their teammates' behavior. Reward functions may differ between agents, and coordination between agents occurs through communication, namely the mutual notification algorithm. We define the learning problem as a decentralized MDP, give an optimality criterion, and prove the convergence of the algorithm for deterministic environments. Finally, we study the convergence properties and the communication overhead on two small examples.
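The report itself is not deposited here, so the following is only a minimal, hypothetical sketch of the kind of scheme the abstract describes: independent Q-learners, each keeping a value table over its own actions with no model of its teammates, that send a notification whenever their local value estimate improves so the team can re-coordinate. The class name, the notification payload, and the reaction to a notification (extra exploration of the flagged state) are assumptions for illustration, not the report's actual mutual notification algorithm.

```python
import random
from collections import defaultdict

class IndependentLearner:
    """Hypothetical independent Q-learner that notifies teammates of improvements."""

    def __init__(self, n_actions, alpha=0.5, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)   # Q(s, a) over this agent's own actions only
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.teammates = []           # other agents to notify (assumed interface)
        self.flagged = set()          # states flagged by teammate notifications

    def act(self, state):
        # Explore more aggressively in states a teammate has flagged (assumption).
        eps = 0.5 if state in self.flagged else self.epsilon
        self.flagged.discard(state)
        if random.random() < eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in range(self.n_actions))
        old = self.q[(state, action)]
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
        # Assumed notification rule: announce improvements of the local value
        # estimate so teammates can re-evaluate their own policies.
        if self.q[(state, action)] > old:
            for mate in self.teammates:
                mate.on_notification(state)

    def on_notification(self, state):
        # Assumed reaction: mark the state for extra exploration.
        self.flagged.add(state)
```

In this reading, communication only happens on improvement events, which is consistent with the abstract's emphasis on keeping learning decentralized while still studying the communication overhead the notifications incur.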
No file deposited

Dates and versions

inria-00100215, version 1 (26-09-2006)

Identifiers

  • HAL Id: inria-00100215, version 1

Cite

Daniel Szer, François Charpillet. Coordination through Mutual Notification in Cooperative Multiagent Reinforcement Learning. [Internal report] A04-R-051, 2004, 8 p. ⟨inria-00100215⟩