Using “social actions” and RL algorithms to build policies in Dec-POMDPs
Abstract
Building individual behaviors that solve collective problems is a major challenge with applications in many domains. The Dec-POMDP has been proposed as a formalism for describing such multi-agent problems; however, solving a Dec-POMDP optimally is NEXP-complete. In this study, we introduce the original concept of social action to circumvent the inherent complexity of Dec-POMDPs, and we propose three decentralized reinforcement learning algorithms that approximate the optimal Dec-POMDP policy. This article analyzes the results obtained and argues that this new approach is promising for the automatic, top-down computation of collective behaviors.
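For reference, a minimal sketch of the standard Dec-POMDP definition follows (after Bernstein et al.); the notation here is illustrative and may differ from that used in the body of the paper.

\[
  \mathcal{M} = \langle I, S, \{A_i\}_{i \in I}, P, R, \{\Omega_i\}_{i \in I}, O \rangle
\]

where $I$ is the finite set of agents, $S$ the set of world states, $A_i$ the action set of agent $i$, $P(s' \mid s, \vec{a})$ the joint transition function, $R(s, \vec{a})$ the shared reward function, $\Omega_i$ the observation set of agent $i$, and $O(\vec{o} \mid s', \vec{a})$ the joint observation function. Solving $\mathcal{M}$ means finding a joint policy $\vec{\pi} = (\pi_1, \dots, \pi_n)$, one local policy per agent mapping individual observation histories to actions, that maximizes the expected cumulative reward.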