Conference Paper, Year: 2010

A POMDP Extension with Belief-dependent Rewards

Mauricio Araya-López
Olivier Buffet
Vincent Thomas
François Charpillet

Abstract

Partially Observable Markov Decision Processes (POMDPs) model sequential decision-making problems under uncertainty and partial observability. Unfortunately, some problems cannot be modeled with state-dependent reward functions, e.g., problems whose objective explicitly requires reducing the uncertainty on the state. To address this, we introduce ρPOMDPs, an extension of POMDPs in which the reward function ρ depends on the belief state. We show that, under the common assumption that ρ is convex, the value function is also convex, which makes it possible to (1) approximate ρ arbitrarily well with a piecewise linear and convex (PWLC) function, and (2) use state-of-the-art exact or approximate solving algorithms with limited changes.
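The PWLC idea in the abstract can be illustrated with a short sketch. The following is a minimal illustration, not the paper's code: it assumes a negative-entropy reward ρ (a common uncertainty-reduction objective, which is convex in the belief) and lower-bounds it by the maximum of tangent hyperplanes (alpha-vectors) taken at a hypothetical set of sampled belief points; denser sampling tightens the approximation.

```python
import numpy as np

def rho(b):
    """Convex belief-dependent reward: negative entropy of the belief b."""
    b = np.clip(b, 1e-12, 1.0)           # avoid log(0)
    return float(np.sum(b * np.log(b)))

def tangent_alpha(b0):
    """Alpha-vector of the tangent hyperplane to rho at belief b0.

    Since rho is convex, the tangent at b0 is a linear lower bound:
        alpha . b <= rho(b) for all beliefs b, with equality at b0.
    """
    b0 = np.clip(b0, 1e-12, 1.0)
    grad = np.log(b0) + 1.0               # gradient of sum_i b_i log b_i
    # Fold the constant term into the vector (beliefs sum to 1).
    return grad + rho(b0) - grad.dot(b0)

def pwlc_reward(b, alphas):
    """PWLC approximation: maximum over alpha-vectors, as in standard POMDP value functions."""
    return max(alpha.dot(b) for alpha in alphas)

if __name__ == "__main__":
    n_states = 3
    # Hypothetical belief points at which tangents are taken.
    base_points = [np.full(n_states, 1.0 / n_states),
                   np.array([0.8, 0.1, 0.1]),
                   np.array([0.1, 0.8, 0.1]),
                   np.array([0.1, 0.1, 0.8])]
    alphas = [tangent_alpha(b0) for b0 in base_points]

    b = np.array([0.6, 0.3, 0.1])
    print("exact rho(b)     =", rho(b))
    print("PWLC lower bound =", pwlc_reward(b, alphas))
```

Because the approximation is itself a maximum of linear functions of the belief, it plugs directly into alpha-vector-based POMDP solvers, which is the point made in item (2) of the abstract.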
Main file: article.pdf (259.11 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

inria-00535560 , version 1 (11-12-2010)
inria-00535560 , version 2 (14-12-2010)

Identifiers

  • HAL Id : inria-00535560 , version 2

Cite

Mauricio Araya-López, Olivier Buffet, Vincent Thomas, François Charpillet. A POMDP Extension with Belief-dependent Rewards. Neural Information Processing Systems - NIPS 2010, Dec 2010, Vancouver, Canada. ⟨inria-00535560v2⟩
440 views
550 downloads
