Exploiting Separability in Multiagent Planning with Continuous-State MDPs (Extended Abstract)
Abstract
Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in cooperative decentralized settings, but they are difficult to solve optimally (NEXP-complete). As a new way of solving these problems, we recently introduced a method for transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This new Dec-POMDP formulation, which we call an occupancy MDP, allows powerful POMDP and continuous-state MDP methods to be used for the first time. However, scalability remains limited when the number of agents or problem variables becomes large. In this paper, we show that, under certain separability conditions on the optimal value function, the scalability of this approach can be improved considerably. Such separability arises when interactions between agents are local, a structure that can be exploited to improve performance. Unlike most previous methods, the novel continuous-state MDP algorithm retains optimality and convergence guarantees. Results show that the extension using separability can scale to a large number of agents and domain variables while maintaining optimality.
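For concreteness, the central objects can be sketched as follows (the notation here is illustrative and may differ from the full paper). The occupancy state $\eta_t$ is the distribution over hidden states $s$ and joint agent histories $\theta$ induced by the decision rules followed so far, and the optimal value function over occupancy states is piecewise-linear and convex, i.e., representable by a finite set $\Gamma_t$ of linear functions:
\[
V_t^*(\eta_t) \;=\; \max_{\alpha \in \Gamma_t} \, \langle \alpha, \eta_t \rangle .
\]
Under the separability conditions considered here, locality of interaction lets the value function further decompose additively over neighborhoods of interacting agents, roughly $V_t^*(\eta_t) = \sum_{N} V_t^{N}(\eta_t^{N})$, where each $\eta_t^{N}$ is a lower-dimensional marginal of the occupancy state; it is this factorization that yields the scalability gains.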