Performance Bounds in $L_p$ norm for Approximate Value Iteration
Abstract
Approximate Value Iteration (AVI) is a method for solving large Markov Decision Problems by approximating the optimal value function with a sequence of value function representations $V_n$ processed according to the iterations $V_{n+1} = \mathcal{A}\mathcal{T}V_n$, where $\mathcal{T}$ is the so-called Bellman operator and $\mathcal{A}$ an approximation operator, which may be implemented by a Supervised Learning (SL) algorithm. Usual bounds on the asymptotic performance of AVI are established in terms of the $L_\infty$-norm approximation errors induced by the SL algorithm. However, most widely used SL algorithms (such as least squares regression) return a function (the best fit) that minimizes an empirical approximation error in $L_p$-norm ($p \geq 1$). In this paper, we extend the performance bounds of AVI to weighted $L_p$-norms, which enables us to relate the performance of AVI directly to the approximation power of the SL algorithm, hence ensuring the tightness and practical relevance of these bounds. The main result is a performance bound on the resulting policies, expressed in terms of the $L_p$-norm errors introduced by the successive approximations. The new bound takes into account a concentration coefficient that estimates how much the discounted future-state distributions, starting from a probability measure used to assess the performance of AVI, can possibly differ from the distribution used in the regression operation. We illustrate the tightness of the bounds on an optimal replacement problem.
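To make the iteration $V_{n+1} = \mathcal{A}\mathcal{T}V_n$ concrete, here is a minimal sketch of AVI on a toy finite MDP with known dynamics. Everything in it (the random MDP, the polynomial feature map, the variable names) is an illustrative assumption, not the paper's setup; the approximation operator $\mathcal{A}$ is implemented as a least-squares fit, so each iteration minimizes an empirical $L_2$ error, which is exactly the kind of SL step the $L_p$ bounds address.

```python
# Minimal sketch of Approximate Value Iteration (AVI) on a toy MDP.
# All names and the MDP itself are illustrative assumptions.
import numpy as np

n_states, n_actions, gamma = 20, 2, 0.9
rng = np.random.default_rng(0)

# Random transition kernel P[a, s, s'] and rewards R[a, s] (toy MDP).
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_actions, n_states))

# Hypothetical feature map: a coarse polynomial basis over the state index.
x = np.linspace(0.0, 1.0, n_states)
Phi = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

V = np.zeros(n_states)
for n in range(200):
    # Bellman operator T: (T V)(s) = max_a [ R(s,a) + gamma * sum_{s'} P(s'|s,a) V(s') ]
    TV = (R + gamma * P @ V).max(axis=0)
    # Approximation operator A: least-squares projection of T V onto the
    # feature span, i.e. V_{n+1} = A T V_n under an empirical L_2 criterion.
    w, *_ = np.linalg.lstsq(Phi, TV, rcond=None)
    V = Phi @ w

# Greedy policy with respect to the final approximate value function.
policy = (R + gamma * P @ V).argmax(axis=0)
print("Greedy policy:", policy)
```

In this sketch the regression is performed uniformly over all states; in the sampled setting the fit is weighted by the data distribution, which is where the concentration coefficient in the bound comes into play.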