Conference Papers, Year: 2021

The convergence rate of regularized learning in games: From bandits and uncertainty to optimism and beyond

Abstract

In this paper, we examine the convergence rate of a wide range of regularized methods for learning in games. To that end, we propose a unified algorithmic template that we call "follow the generalized leader" (FTGL), and which includes as special cases the canonical "follow the regularized leader" algorithm, its optimistic variants, extra-gradient schemes, and many others. The proposed framework is also sufficiently flexible to account for several different feedback models, from full information to bandit feedback. In this general setting, we show that FTGL algorithms converge locally to strict Nash equilibria at a rate which does not depend on the level of uncertainty faced by the players, but only on the geometry of the regularizer near the equilibrium. In particular, we show that algorithms based on entropic regularization (like the exponential weights algorithm) enjoy a linear convergence rate, while Euclidean projection methods converge to equilibrium in a finite number of iterations, even with bandit feedback.
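For intuition, the following is a minimal sketch (not taken from the paper) of the exponential weights special case mentioned in the abstract, i.e., "follow the regularized leader" with an entropic regularizer, run by both players of a symmetric 2x2 game with a strict Nash equilibrium under full-information feedback. The payoff matrix, step size, and horizon are illustrative assumptions, not values from the paper.

```python
import numpy as np

# A symmetric 2x2 game with a strict Nash equilibrium: a prisoner's
# dilemma, where the second action ("defect") strictly dominates.
# A[i, j] is the payoff to a player choosing action i against an
# opponent choosing action j (numbers are hypothetical).
A = np.array([[3.0, 0.0],
              [4.0, 1.0]])

def logit_choice(scores, eta):
    """Logit choice map: the mixed strategy induced by an entropic
    regularizer on the cumulative payoff vector `scores`."""
    z = eta * scores
    w = np.exp(z - z.max())       # shift by the max for numerical stability
    return w / w.sum()

eta = 0.5                         # step size (assumed value)
s1 = np.zeros(2)                  # player 1's cumulative payoffs per action
s2 = np.zeros(2)                  # player 2's cumulative payoffs per action

for t in range(200):
    x1 = logit_choice(s1, eta)    # mirror step for player 1
    x2 = logit_choice(s2, eta)    # mirror step for player 2
    s1 += A @ x2                  # full-information payoff vector, player 1
    s2 += A @ x1                  # same for player 2 (the game is symmetric)

print(x1, x2)  # both concentrate on the dominant action: approximately [0, 1]
```

Because the dominant action's cumulative score grows linearly faster than the other's, the logit map pushes both mixed strategies toward the strict equilibrium at a geometric rate, which is the kind of linear convergence the abstract attributes to entropic regularization.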
Main file: StrictRates.pdf (3.76 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03357715, version 1 (29-09-2021)

Identifiers

  • HAL Id: hal-03357715, version 1

Cite

Angeliki Giannou, Emmanouil Vasileios Vlatakis-Gkaragkounis, Panayotis Mertikopoulos. The convergence rate of regularized learning in games: From bandits and uncertainty to optimism and beyond. NeurIPS 2021 - 35th International Conference on Neural Information Processing Systems, Dec 2021, Virtual. pp.1-28. ⟨hal-03357715⟩
