On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes
Abstract
We consider infinite-horizon discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. We consider the algorithm Value Iteration and the sequence of policies $\pi_1,\dots,\pi_k$ it generates until some iteration $k$. We provide performance bounds for non-stationary policies involving the last $m$ generated policies that reduce the state-of-the-art bound for the last stationary policy $\pi_k$ by a factor $\frac{1-\gamma}{1-\gamma^m}$. In other words, and contrary to a common intuition, we show that it may be much easier to find a non-stationary approximately-optimal policy than a stationary one.
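The following minimal sketch (not the authors' implementation) illustrates the object the abstract refers to: run Value Iteration, keep the last $m$ greedy policies $\pi_{k-m+1},\dots,\pi_k$, and compare the stationary policy $\pi_k$ against the non-stationary policy that plays $\pi_k,\pi_{k-1},\dots,\pi_{k-m+1}$ cyclically. The toy random MDP, the function names, and the noise term used to mimic approximation error are all illustrative assumptions, not part of the paper.

```python
import numpy as np

def value_iteration(P, R, gamma, k, eps=0.0, rng=None):
    """Run k iterations of (possibly noisy) Value Iteration.
    P: (A, S, S) transition tensor, R: (S, A) rewards.
    eps > 0 perturbs each iterate to mimic approximation error (illustrative).
    Returns the list of greedy policies [pi_1, ..., pi_k]."""
    A, S, _ = P.shape
    rng = rng or np.random.default_rng(0)
    v = np.zeros(S)
    policies = []
    for _ in range(k):
        q = R + gamma * np.einsum("ast,t->sa", P, v)   # Q(s, a) w.r.t. current v
        policies.append(q.argmax(axis=1))              # greedy policy at this iteration
        v = q.max(axis=1) + eps * rng.standard_normal(S)
    return policies

def evaluate_nonstationary(schedule, P, R, gamma, horizon=3000):
    """Approximate value of the non-stationary policy that plays
    schedule[t % len(schedule)] at time t, by backward induction
    over a long truncated horizon."""
    S = R.shape[0]
    idx = np.arange(S)
    v = np.zeros(S)
    for t in reversed(range(horizon)):
        pi = schedule[t % len(schedule)]
        v = R[idx, pi] + gamma * P[pi, idx, :] @ v
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, gamma, k, m = 30, 4, 0.95, 50, 10
    P = rng.random((A, S, S)); P /= P.sum(axis=2, keepdims=True)  # row-stochastic
    R = rng.random((S, A))
    policies = value_iteration(P, R, gamma, k, eps=0.05, rng=rng)
    v_stationary = evaluate_nonstationary([policies[-1]], P, R, gamma)
    # cycle through pi_k, pi_{k-1}, ..., pi_{k-m+1}
    v_nonstationary = evaluate_nonstationary(policies[:-m-1:-1], P, R, gamma)
    print("stationary policy, mean value    :", v_stationary.mean())
    print("non-stationary policy, mean value:", v_nonstationary.mean())
```

The stationary case is recovered as the length-one schedule `[policies[-1]]`; the advantage of the periodic schedule that the bound describes only shows up when the iterates are computed approximately (here simulated by `eps`), not under exact Value Iteration.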
Domains
Artificial Intelligence [cs.AI]

Origin: Files produced by the author(s)