Conference Paper, 2014

Exploiting Separability in Multiagent Planning with Continuous-State MDPs


Recent years have seen significant advances in techniques for optimally solving multiagent problems represented as decentralized partially observable Markov decision processes (Dec-POMDPs). A new method achieves scalability gains by converting Dec-POMDPs into continuous-state MDPs. This method relies on the assumption of a centralized planning phase that generates a set of decentralized policies for the agents to execute. However, scalability remains limited when the number of agents or problem variables becomes large. In this paper, we show that, under certain separability conditions on the optimal value function, the scalability of this approach can increase considerably. This separability is present when there is locality of interaction, which other approaches (such as those based on the ND-POMDP subclass) have already shown can be exploited to improve performance. Unlike most previous methods, the novel continuous-state MDP algorithm retains optimality and convergence guarantees. Results show that the extension using separability can scale to a large number of agents and domain variables while maintaining optimality.
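For intuition, here is a hedged sketch of the separability condition the abstract refers to; the notation (occupancy state $\eta$, neighborhood set $E$) is assumed for illustration rather than quoted from the paper. The continuous-state reformulation treats the occupancy state $\eta$, a distribution over joint states and joint action-observation histories, as the state of a deterministic continuous-state MDP. Separability then says the optimal value function decomposes additively over agent neighborhoods:

\[
V^{*}(\eta) \;=\; \sum_{e \in E} V^{*}_{e}(\eta_{e}),
\]

where $\eta_{e}$ is the marginal of $\eta$ over the state variables and histories of neighborhood $e$. Under locality of interaction, as in ND-POMDPs, each component $V^{*}_{e}$ can be represented and backed up over a much smaller space than the full occupancy state, which is the source of the scalability gains claimed above.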
Main file: aamas14a.pdf (302.42 KB)
Origin: publisher files allowed on an open archive

Dates and versions

HAL Id: hal-01092066, version 1 (deposited 10-12-2014)


Jilles Dibangoye, Christopher Amato, Olivier Buffet, François Charpillet. Exploiting Separability in Multiagent Planning with Continuous-State MDPs. AAMAS 2014 - 13th International Conference on Autonomous Agents and Multiagent Systems, May 2014, Paris, France. ⟨hal-01092066⟩

