Journal article in Mathematics of Operations Research, 2015

Penalty-Regulated Dynamics and Robust Learning Procedures in Games

Abstract

Starting from a heuristic learning scheme for N-person games, we derive a new class of continuous-time learning dynamics consisting of a replicator-like drift adjusted by a penalty term that renders the boundary of the game's strategy space repelling. These penalty-regulated dynamics are equivalent to players keeping an exponentially discounted aggregate of their ongoing payoffs and then using a smooth best response to pick an action based on these performance scores. Owing to this inherent duality, the proposed dynamics satisfy a variant of the folk theorem of evolutionary game theory and they converge to (arbitrarily precise) approximations of Nash equilibria in potential games. Motivated by applications to traffic engineering, we exploit this duality further to design a discrete-time, payoff-based learning algorithm which retains these convergence properties and only requires players to observe their in-game payoffs; moreover, the algorithm remains robust in the presence of stochastic perturbations and observation errors, and it does not require any synchronization between players.
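To make the score-based scheme in the abstract concrete, the following is a minimal Python/NumPy sketch of its general structure: each player maintains an exponentially discounted aggregate of received payoffs and samples an action from a logit (softmax) smooth best response to those scores. The two-player game, discount rate, and temperature below are illustrative placeholders, and the update rule is a simplification rather than the exact algorithm analyzed in the paper (which specifies step sizes, payoff estimators, and convergence guarantees).

```python
import numpy as np

rng = np.random.default_rng(0)

def logit_choice(scores, temperature=0.1):
    """Smooth best response: softmax over a player's performance scores."""
    z = scores / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical 2-player game with 3 actions each:
# payoff_tables[i][own_action, opponent_action] is player i's payoff.
payoff_tables = [rng.uniform(size=(3, 3)), rng.uniform(size=(3, 3))]

scores = [np.zeros(3), np.zeros(3)]   # discounted payoff aggregates per action
discount = 0.05                        # illustrative discount rate

for n in range(5000):
    # Each player samples an action from the smooth best response to their scores.
    strategies = [logit_choice(s) for s in scores]
    actions = [rng.choice(3, p=x) for x in strategies]

    # Players only observe their own realized in-game payoff (payoff-based feedback).
    payoffs = [payoff_tables[0][actions[0], actions[1]],
               payoff_tables[1][actions[1], actions[0]]]

    # Exponentially discounted score update, crediting the action actually played.
    for i in range(2):
        scores[i] *= (1.0 - discount)
        scores[i][actions[i]] += discount * payoffs[i]

print("Player 1 mixed strategy:", np.round(logit_choice(scores[0]), 3))
print("Player 2 mixed strategy:", np.round(logit_choice(scores[1]), 3))
```

In this sketch the temperature of the logit map controls how closely the smooth best response approximates an exact best response, which mirrors the abstract's point that the dynamics converge to arbitrarily precise approximations of Nash equilibria.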
Main file: 1303.2270v2.pdf (2.33 MB). Origin: files produced by the author(s).

Dates and versions

hal-01235243, version 1 (29-11-2015)

Identifiers

Cite

Pierre Coucheney, Bruno Gaujal, Panayotis Mertikopoulos. Penalty-Regulated Dynamics and Robust Learning Procedures in Games. Mathematics of Operations Research, 2015, 40 (3), pp.611-633. ⟨10.1287/moor.2014.0687⟩. ⟨hal-01235243⟩
176 views
310 downloads

