Conference paper, 2018

ρ-POMDPs have Lipschitz-Continuous ϵ-Optimal Value Functions

Abstract

Many state-of-the-art algorithms for solving Partially Observable Markov Decision Processes (POMDPs) rely on turning the problem into a "fully observable" problem---a belief MDP---and exploiting the piecewise linearity and convexity (PWLC) of the optimal value function in this new state space (the belief simplex ∆). This approach has been extended to solving ρ-POMDPs---i.e., for information-oriented criteria---when the reward ρ is convex in ∆. General ρ-POMDPs can also be turned into "fully observable" problems, but with no means to exploit the PWLC property. In this paper, we focus on POMDPs and ρ-POMDPs with λ_ρ-Lipschitz reward functions, and demonstrate that, for finite horizons, the optimal value function is Lipschitz-continuous. Then, value function approximators are proposed for both upper- and lower-bounding the optimal value function, and are shown to provide uniformly improvable bounds. This leads to two algorithms derived from HSVI, which are empirically evaluated on various benchmark problems.
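As an illustration, here is a minimal sketch (not the authors' implementation) of the kind of Lipschitz-cone value-function bounds the abstract alludes to. The names `lam` (an assumed known Lipschitz constant of the optimal value function in 1-norm) and `points` (sampled belief-value pairs) are illustrative assumptions:

```python
import numpy as np

# Sketch of Lipschitz-cone bounds on a value function over the belief
# simplex. Assumptions (not from the paper's code): `lam` bounds the
# Lipschitz constant of V* w.r.t. the 1-norm, and `points` is a list of
# (belief, value) samples.

def lipschitz_lower_bound(b, points, lam):
    # Each sample (b_i, v_i) with v_i <= V*(b_i) induces a downward cone
    # v_i - lam * ||b - b_i||_1 that V* dominates; take the max over cones.
    return max(v - lam * np.abs(b - bi).sum() for bi, v in points)

def lipschitz_upper_bound(b, points, lam):
    # Symmetrically, samples with v_i >= V*(b_i) induce upward cones
    # v_i + lam * ||b - b_i||_1 that dominate V*; take the min.
    return min(v + lam * np.abs(b - bi).sum() for bi, v in points)

# Usage on a 3-state belief simplex:
pts = [(np.array([1.0, 0.0, 0.0]), 2.0),
       (np.array([0.0, 1.0, 0.0]), 1.5)]
b = np.array([0.5, 0.5, 0.0])
print(lipschitz_lower_bound(b, pts, lam=1.0))  # max(2.0 - 1.0, 1.5 - 1.0) = 1.0
print(lipschitz_upper_bound(b, pts, lam=1.0))  # min(2.0 + 1.0, 1.5 + 1.0) = 2.5
```

Adding further samples can only tighten such bounds, which matches the "uniformly improvable" property the abstract mentions; the 1-norm here is only one possible choice of metric on ∆.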
Main file: nips18-ext.pdf (382.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01903685, version 1 (24-10-2018)
hal-01903685, version 2 (09-01-2019)

Identifiers

  • HAL Id: hal-01903685, version 1

Cite

Mathieu Fehr, Olivier Buffet, Vincent Thomas, Jilles Dibangoye. ρ-POMDPs have Lipschitz-Continuous ϵ-Optimal Value Functions. NIPS 2018 - Thirty-second Conference on Neural Information Processing Systems, Dec 2018, Montréal, Canada. ⟨hal-01903685v1⟩
483 Views
384 Downloads
