Journal article, Annals of Operations Research, 2012

Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs

Abstract

This paper is concerned with the links between the Value Iteration algorithm and the Rolling Horizon procedure for solving problems of stochastic optimal control under the long-run average criterion, in Markov Decision Processes with finite state and action spaces. We review conditions from the literature which imply the geometric convergence of Value Iteration to the optimal value. Aperiodicity is an essential prerequisite for convergence. We prove that the convergence of Value Iteration generally implies that of Rolling Horizon. We also present a modified Rolling Horizon procedure that can be applied to models without analyzing periodicity, and discuss the impact of this transformation on convergence. We illustrate the different results with numerous examples. Finally, we discuss rules for stopping Value Iteration or for choosing the length of a Rolling Horizon. We provide an example which demonstrates the difficulty of the question, disproving in particular a rule conjectured by Puterman.
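To make the two procedures named in the abstract concrete, here is a minimal NumPy sketch of relative Value Iteration for a finite average-cost MDP and of the Rolling Horizon decision rule derived from a finite-horizon backup. The function names, the tabular (P, c) input format, the reference state, and the span-seminorm stopping rule are illustrative assumptions of this sketch, not the paper's setup; as the abstract stresses, convergence relies on aperiodicity (and standard unichain-type) conditions.

```python
import numpy as np

def relative_value_iteration(P, c, tol=1e-8, max_iter=100_000):
    """Relative Value Iteration for a finite average-cost MDP (illustrative sketch).

    P : (A, S, S) array, P[a, s, s'] = transition probability under action a.
    c : (A, S) array,    c[a, s]     = one-step cost of action a in state s.
    Returns (estimated optimal average cost, bias vector, greedy stationary policy).
    Geometric convergence requires aperiodicity-type conditions (see the paper).
    """
    A, S, _ = P.shape
    h = np.zeros(S)                      # relative values (bias), reference state 0
    g = 0.0
    for _ in range(max_iter):
        Q = c + P @ h                    # Q[a, s] = c(a, s) + E[h(next state)]
        Th = Q.min(axis=0)               # dynamic-programming operator applied to h
        if np.ptp(Th - h) < tol:         # span-seminorm stopping rule
            g, h = Th[0], Th - Th[0]
            break
        g = Th[0]                        # current estimate of the optimal average cost
        h = Th - g                       # renormalize at the reference state
    policy = (c + P @ h).argmin(axis=0)
    return g, h, policy


def rolling_horizon_action(P, c, h_terminal, N, s):
    """Rolling Horizon decision rule of length N >= 1 (illustrative sketch):
    return the first action of the N-stage finite-horizon problem started in
    state s, with terminal value h_terminal (often the zero vector)."""
    v = h_terminal.copy()
    for _ in range(N):                   # backward induction, one stage per pass
        Q = c + P @ v
        v = Q.min(axis=0)
    return int(Q[:, s].argmin())
```

Applying rolling_horizon_action in every visited state yields the stationary Rolling Horizon policy; the paper's point is that when Value Iteration converges, this procedure is also well behaved, so the length N can be tied to a Value Iteration stopping rule.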

Dates and versions

hal-00862915, version 1 (17-09-2013)

Identifiers

DOI: 10.1007/s10479-012-1070-0

Cite

Eugenio Della Vecchia, Silvia C. Di Marco, Alain Jean-Marie. Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs. Annals of Operations Research, 2012, 199 (1), pp.193-214. ⟨10.1007/s10479-012-1070-0⟩. ⟨hal-00862915⟩