Generalized conditional gradient and learning in potential mean field games
Abstract
We investigate the numerical resolution of second-order, potential, and monotone mean field games with the generalized conditional gradient algorithm, an extension of the Frank-Wolfe algorithm. We show that the method is equivalent to the fictitious play method. We establish convergence rates for the optimality gap, the exploitability, and the distances of the variables to the unique solution of the mean field game, for various choices of stepsize. In particular, we show that linear convergence can be achieved when the stepsizes are computed by linesearch.
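To illustrate the kind of iteration the abstract refers to, below is a minimal sketch of the generic Frank-Wolfe (conditional gradient) loop with the two stepsize rules mentioned: the open-loop stepsize 2/(k+2) and an exact linesearch. The quadratic objective, the probability-simplex feasible set, and all function names are illustrative assumptions; this is not the paper's mean-field-game formulation of the generalized conditional gradient method.

```python
# Illustrative sketch only: generic Frank-Wolfe iteration on a toy problem,
# not the paper's mean field game setting.
import numpy as np

def frank_wolfe(grad, lmo, x0, n_iters=100, linesearch=None):
    """Generic Frank-Wolfe loop.

    grad(x)    -> gradient of the objective at x
    lmo(g)     -> linear minimization oracle: argmin_{s in C} <g, s>
    linesearch -> optional callable (x, s) -> step in [0, 1];
                  if None, the open-loop stepsize 2/(k+2) is used
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = lmo(g)                                   # Frank-Wolfe vertex
        gamma = linesearch(x, s) if linesearch else 2.0 / (k + 2.0)
        x = x + gamma * (s - x)                      # convex combination stays feasible
    return x

# Toy example (assumed): minimize 0.5 * ||x - b||^2 over the probability simplex.
rng = np.random.default_rng(0)
b = rng.normal(size=5)

grad = lambda x: x - b
lmo = lambda g: np.eye(len(g))[np.argmin(g)]         # best vertex of the simplex

def exact_linesearch(x, s):
    # For a quadratic objective, the optimal step along d = s - x is closed-form.
    d = s - x
    denom = d @ d
    if denom == 0.0:
        return 0.0
    return float(np.clip(-(grad(x) @ d) / denom, 0.0, 1.0))

x_star = frank_wolfe(grad, lmo, np.ones(5) / 5, linesearch=exact_linesearch)
```

In the mean field game setting studied in the paper, the linear minimization step corresponds to a best response of the representative agent, which is why the resulting scheme coincides with fictitious play; the sketch above only shows the abstract structure of the iteration.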