Reoptimization Nearly Solves Weakly Coupled Markov Decision Processes
Abstract
We propose a new policy, called the LP-update policy, for solving finite-horizon weakly coupled Markov decision processes. These can be viewed as multi-constraint, multi-action bandits, and generalize classical restless bandit problems. Our approach is based on periodically re-solving a relaxed version of the original problem, which can be cast as a linear program (LP). When the problem consists of $N$ statistically identical sub-components, we show that the LP-update policy becomes asymptotically optimal at rate $O(T^2/\sqrt{N})$. This rate improves to $O(T/\sqrt{N})$ if the problem satisfies a certain ergodicity property, and to $O(1/N)$ if the problem is non-degenerate. Our definition of non-degeneracy extends the corresponding notion for restless bandits. Using this property, we also improve the computational efficiency of the LP-update policy. We illustrate the performance of our policy on randomly generated examples as well as on a generalized applicant screening problem, and show that it outperforms existing heuristics.
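To make the relaxation concrete, the following is a minimal sketch of the LP that an LP-update-style policy re-solves at each decision epoch, specialized to the restless-bandit case (two actions per arm, one budget constraint) with $N$ statistically identical arms. It is an illustration under simplifying assumptions, not the paper's implementation; all names and parameters (`solve_lp_relaxation`, `x0`, `P`, `r`, `alpha`) are hypothetical.

```python
# Minimal sketch (not the paper's code): the LP relaxation re-solved at
# each decision epoch, specialized to a restless bandit with S states,
# two actions (0 = passive, 1 = active) and a budget of alpha*N
# activations per period.  All names here are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

def solve_lp_relaxation(x0, P, r, alpha, T):
    """Solve the relaxed problem over the remaining horizon T.

    x0 : (S,)       current empirical distribution of the arms over states
    P  : (2, S, S)  transition matrices, one per action
    r  : (S, 2)     per-arm rewards r[s, a]
    Returns y of shape (T, S, 2), where y[t, s, a] is the expected
    fraction of arms in state s taking action a at time t.
    """
    S = len(x0)
    n = T * S * 2                          # one LP variable per (t, s, a)
    idx = lambda t, s, a: (t * S + s) * 2 + a

    # linprog minimizes, so negate the rewards to maximize total reward.
    c = np.zeros(n)
    for t in range(T):
        for s in range(S):
            for a in range(2):
                c[idx(t, s, a)] = -r[s, a]

    A_eq, b_eq = [], []
    # Initial condition: sum_a y[0, s, a] = x0[s].
    for s in range(S):
        row = np.zeros(n)
        row[idx(0, s, 0)] = row[idx(0, s, 1)] = 1.0
        A_eq.append(row)
        b_eq.append(x0[s])
    # Flow conservation: sum_a y[t+1, s', a] = sum_{s,a} y[t, s, a] P[a][s, s'].
    for t in range(T - 1):
        for sp in range(S):
            row = np.zeros(n)
            row[idx(t + 1, sp, 0)] = row[idx(t + 1, sp, 1)] = 1.0
            for s in range(S):
                for a in range(2):
                    row[idx(t, s, a)] -= P[a][s, sp]
            A_eq.append(row)
            b_eq.append(0.0)

    # Relaxed budget: the *expected* fraction of active arms is at most
    # alpha in each period (the hard per-sample-path constraint is relaxed).
    A_ub, b_ub = [], []
    for t in range(T):
        row = np.zeros(n)
        for s in range(S):
            row[idx(t, s, 1)] = 1.0
        A_ub.append(row)
        b_ub.append(alpha)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n)
    assert res.success, res.message
    return res.x.reshape(T, S, 2)
```

At each epoch, a re-solving policy in this spirit would call `solve_lp_relaxation` with the current empirical state distribution as `x0` and the remaining horizon as `T`, then activate roughly $N \cdot y[0, s, 1]$ arms in each state $s$, rounding so that the hard budget of $\alpha N$ activations is never exceeded.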