Conference paper Year: 2015

Exploiting separability in multiagent planning with continuous-state MDPs (extended abstract)

Abstract

Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in cooperative decentralized settings, but are difficult to solve optimally (NEXP-complete). As a new way of solving these problems, we recently introduced a method for transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This new Dec-POMDP formulation, which we call an occupancy MDP, allows powerful POMDP and continuous-state MDP methods to be used for the first time. However, scalability remains limited when the number of agents or problem variables becomes large. In this paper, we show that, under certain separability conditions of the optimal value function, the scalability of this approach can increase considerably. This separability is present when there is locality of interaction between agents, which can be exploited to improve performance. Unlike most previous methods, the novel continuous-state MDP algorithm retains optimality and convergence guarantees. Results show that the extension using separability can scale to a large number of agents and domain variables while maintaining optimality.
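To make the two central ideas of the abstract concrete, here is a minimal sketch in Python: a piecewise-linear and convex (PWLC) value function over occupancy states is the max over a finite set of linear functions ("alpha vectors"), and under the separability condition it decomposes additively over factors, each depending only on a local occupancy marginal. All names (`value`, `separable_value`, the toy occupancy vectors) are illustrative assumptions, not the authors' implementation; real occupancy states are distributions over hidden-state/joint-history pairs, abstracted here as plain probability vectors.

```python
# Sketch: PWLC value functions over occupancy states, as in the
# occupancy-MDP view of a Dec-POMDP. Toy data throughout.

# An occupancy state: a distribution over (hidden state, joint history)
# pairs, indexed here abstractly by position.
occupancy = [0.4, 0.1, 0.3, 0.2]

# A PWLC value function: V(eta) = max_alpha <eta, alpha>,
# exactly as in POMDP value iteration.
alpha_vectors = [
    [1.0, 0.5, 0.2, 0.0],
    [0.3, 0.9, 0.4, 0.6],
]

def value(eta, alphas):
    """Evaluate V(eta) as the max over alpha vectors of <eta, alpha>."""
    return max(sum(p * a for p, a in zip(eta, alpha)) for alpha in alphas)

print(value(occupancy, alpha_vectors))

# Separability (locality of interaction): the value function splits into
# a sum of local PWLC terms, each over a much smaller local occupancy
# marginal, so alpha vectors live in low-dimensional factor spaces:
#     V(eta) = sum_f max_{alpha_f} <eta_f, alpha_f>
local_occupancies = {"f1": [0.6, 0.4], "f2": [0.7, 0.3]}  # hypothetical marginals
local_alphas = {"f1": [[1.0, 0.0], [0.2, 0.8]], "f2": [[0.5, 0.5]]}

def separable_value(etas, alphas_by_factor):
    """Sum of local PWLC values, one term per factor."""
    return sum(value(etas[f], alphas_by_factor[f]) for f in etas)

print(separable_value(local_occupancies, local_alphas))
```

Note the scalability argument: the separable form is still PWLC (a sum of independent maxes equals a max over tuples of per-factor alpha vectors), but each factor's vectors grow with the size of its local marginal rather than with the full joint occupancy space. In the actual algorithm the per-factor choices are coupled through the joint policy, so this independent maximization is a simplification for illustration.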
File not deposited

Dates and versions

hal-01188483 , version 1 (31-08-2015)

Identifiers

  • HAL Id : hal-01188483 , version 1

Cite

Jilles Steeve Dibangoye, Christopher Amato, Olivier Buffet, François Charpillet. Exploiting separability in multiagent planning with continuous-state MDPs (extended abstract). IJCAI 2015 - 24th International Joint Conference on Artificial Intelligence, Jul 2015, Buenos Aires, Argentina. ⟨hal-01188483⟩