Preprint / Working Paper, 2013

Improved and Generalized Upper Bounds on the Complexity of Policy Iteration

Bruno Scherrer

Abstract

Given a Markov Decision Process (MDP) with $n$ states and $m$ actions per state, we study the number of iterations needed by Policy Iteration (PI) algorithms to converge. We consider two variations of PI: Howard's PI, which changes all the actions with a positive advantage, and Simplex-PI, which only changes one action with maximal advantage. We show that Howard's PI terminates after at most $ n(m-1) \left \lceil \frac{1}{1-\gamma}\log \left( \frac{1}{1-\gamma} \right) \right \rceil $ iterations, improving by a factor $O(\log n)$ a result by Hansen et al. (2013), while Simplex-PI terminates after at most $ n(m-1) \left\lceil \frac{n}{1-\gamma} \log \left( \frac{n}{1-\gamma} \right)\right\rceil $ iterations, improving by a factor of 2 a result by Ye (2011). Under some structural assumptions on the MDP, we then consider bounds that are independent of the discount factor $\gamma$. When the MDP is deterministic, we show that Simplex-PI terminates after at most $ 2 n^2 m (m-1) \lceil 2 (n-1) \log n \rceil \lceil 2 n \log n \rceil = O(n^4 m^2 \log^2 n) $ iterations, improving by a factor $O(n)$ a bound obtained by Post and Ye (2012). We generalize this result to stochastic MDPs: given a measure $\tau_t$ of the maximal transient time and a measure $\tau_r$ of the maximal time to revisit states in recurrent classes under all policies, we show that Simplex-PI terminates after at most $ n^2 m (m-1) \left(\lceil \tau_r \log (n \tau_r) \rceil +\lceil \tau_r \log (n \tau_t) \rceil \right) \lceil {\tau_t} \log (n (\tau_t+1)) \rceil = \tilde O ( n^2 \tau_t \tau_r m^2 ) $ iterations. We explain why similar results seem hard to derive for Howard's PI. Finally, under the additional (restrictive) assumption that the MDP is weakly communicating, we show that Simplex-PI and Howard's PI terminate after at most $n(m-1) \left( \lceil \tau_t \log (n \tau_t) \rceil + \lceil \tau_r \log (n \tau_r) \rceil \right) =\tilde O(nm (\tau_t+\tau_r))$ iterations.
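For intuition, the two update rules compared in the abstract can be sketched in a few lines of code. The following is a minimal illustrative sketch (mine, not code from the paper): the array layout, the names `P`, `R`, `gamma`, and the numerical tolerance are all assumptions made for the example. It contrasts Howard's PI, which switches every state that has an action with positive advantage, with Simplex-PI, which switches a single state-action pair of maximal advantage.

```python
# Illustrative sketch only; not from the paper. Assumed conventions:
#   P: (m, n, n) array, P[a, s, s'] = transition probability
#   R: (m, n) array,    R[a, s]     = expected reward
import numpy as np

def policy_value(P, R, policy, gamma):
    """Solve v = r_pi + gamma * P_pi v for the value of a fixed policy."""
    n = P.shape[1]
    P_pi = P[policy, np.arange(n)]   # (n, n) transition matrix under pi
    r_pi = R[policy, np.arange(n)]   # (n,) reward vector under pi
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)

def policy_iteration(P, R, gamma, rule="howard"):
    """Run PI with either update rule; returns an optimal policy."""
    m, n, _ = P.shape
    policy = np.zeros(n, dtype=int)
    while True:
        v = policy_value(P, R, policy, gamma)
        q = R + gamma * (P @ v)      # (m, n) action values Q(s, a)
        adv = q - v[None, :]         # advantage of action a in state s
        if np.all(adv.max(axis=0) <= 1e-12):
            return policy            # no positive advantage: pi is optimal
        if rule == "howard":
            # Howard's PI: switch every state with a positive advantage.
            improvable = adv.max(axis=0) > 1e-12
            policy[improvable] = adv[:, improvable].argmax(axis=0)
        else:
            # Simplex-PI: switch only the single (state, action) pair
            # with maximal advantage.
            a, s = np.unravel_index(adv.argmax(), adv.shape)
            policy[s] = a
```

Both rules terminate because each switch strictly improves the value of the current policy; the bounds stated in the abstract quantify how many such switches can occur.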

Dates and versions

hal-00829532 , version 1 (03-06-2013)
hal-00829532 , version 2 (06-06-2013)
hal-00829532 , version 3 (24-06-2013)
hal-00829532 , version 4 (10-02-2016)

Cite

Bruno Scherrer. Improved and Generalized Upper Bounds on the Complexity of Policy Iteration. 2013. ⟨hal-00829532v2⟩