Policy iteration algorithm for zero-sum multichain stochastic games with mean payoff and perfect information
Abstract
We consider zero-sum stochastic games with finite state and action spaces, perfect information, and the mean payoff criterion, without any irreducibility assumption on the Markov chains associated with strategies (multichain games). The value of such a game can be characterized by a system of nonlinear equations involving the mean payoff vector and an auxiliary vector (relative value or bias). We develop here a policy iteration algorithm for zero-sum stochastic games with mean payoff, following an idea of two of the authors (Cochet-Terrasson and Gaubert, C. R. Math. Acad. Sci. Paris, 2006). The algorithm relies on a notion of nonlinear spectral projection (Akian and Gaubert, Nonlinear Analysis TMA, 2003), which is analogous to the reduction of superharmonic functions in linear potential theory. To avoid cycling, at each degenerate iteration (in which the mean payoff vector is not improved), the new relative value is obtained by reducing the earlier one. We show that the sequence of values and relative values satisfies a lexicographical monotonicity property, which implies that the algorithm terminates. We illustrate the algorithm on a mean-payoff version of Richman games (stochastic tug-of-war, or discrete infinity-Laplacian-type equations), in which degenerate iterations are frequent. We report numerical experiments on large-scale instances arising from the latter games, as well as from monotone discretizations of a mean-payoff deterministic pursuit-evasion differential game.
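As a point of reference for the system of nonlinear equations mentioned above, the display below sketches the classical multichain optimality equations for the mean payoff in the one-player (Markov decision process) special case; the notation ($\chi$ for the mean payoff vector, $v$ for the relative value, $r^a_i$ and $P^a_{ij}$ for the payoff and transition probabilities of action $a$ in state $i$, and $A_i$ for the actions available in state $i$) is illustrative and not necessarily the paper's. In the two-player perfect-information game, the maxima are replaced by minima at the states controlled by the minimizing player.
\[
\chi_i \;=\; \max_{a \in A_i} \sum_{j} P^{a}_{ij}\,\chi_j,
\qquad
\chi_i + v_i \;=\; \max_{a \in A_i(\chi)} \Big( r^{a}_{i} + \sum_{j} P^{a}_{ij}\, v_j \Big),
\]
where $A_i(\chi)$ denotes the set of actions attaining the maximum in the first equation. A pair $(\chi, v)$ solving this coupled system yields the optimal mean payoff $\chi$; policy iteration alternates between evaluating such a system for a fixed policy and improving the policy state by state.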