Learning to Act in Decentralized Partially Observable MDPs
Abstract
We address a long-standing open problem of reinforcement learning in decentralized partially
observable Markov decision processes. Previous attempts focused on different forms of generalized policy
iteration, which at best led to local optima. In this paper, we restrict attention to plans, which are simpler
to store and update than policies. We derive, under certain conditions, the first near-optimal cooperative
multi-agent reinforcement learning algorithm. To achieve significant scalability gains, we replace greedy
maximization with mixed-integer linear programming. Experiments show our approach can learn to act
near-optimally in many finite domains from the literature.
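The scalability claim rests on replacing greedy maximization over joint actions with a mixed-integer linear program. Below is a minimal sketch of that idea, not the paper's actual formulation: it assumes the PuLP library and a hypothetical additive per-agent score table `q`, and encodes "each agent selects exactly one action" with binary variables.

```python
# Minimal sketch: pick one action per agent to maximize an additive score,
# encoded as a mixed-integer linear program instead of enumerating all
# joint actions. `q` is a hypothetical score table, not the paper's objective.
import pulp

q = [[1.0, 3.0], [2.0, 0.5]]  # hypothetical scores: q[agent][action]
n_agents, n_actions = len(q), len(q[0])

prob = pulp.LpProblem("joint_action_selection", pulp.LpMaximize)
# x[i][a] = 1 iff agent i selects action a.
x = [[pulp.LpVariable(f"x_{i}_{a}", cat="Binary") for a in range(n_actions)]
     for i in range(n_agents)]
for i in range(n_agents):
    prob += pulp.lpSum(x[i]) == 1  # each agent picks exactly one action
# Objective: total score of the selected joint action.
prob += pulp.lpSum(q[i][a] * x[i][a]
                   for i in range(n_agents) for a in range(n_actions))
prob.solve(pulp.PULP_CBC_CMD(msg=False))

joint_action = [next(a for a in range(n_actions) if x[i][a].value() == 1)
                for i in range(n_agents)]
print(joint_action)  # [1, 0] for the scores above
```

In this toy form the objective is separable, so per-agent greedy selection would give the same answer; the MILP encoding pays off once constraints or coupling terms make enumeration over the exponentially many joint actions intractable.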