Meta-Level Control Under Uncertainty for Handling Multiple Consumable Resources of Robots
Abstract
Most work on planning under uncertainty in AI assumes rather simple action models that do not consider multiple resources. This assumption is not reasonable for many applications, such as planetary rovers and robotics, which must cope with uncertainty about task durations, energy consumption, and data storage. In this paper, we outline an approach to control the operation of an autonomous rover that operates under multiple resource constraints. We consider a directed acyclic graph of progressive processing tasks with multiple resources, for which an optimal policy is obtained by solving a corresponding Markov Decision Process (MDP). Computing an optimal policy for an MDP with multiple resources leads to a very large state space, so the optimal policy cannot be computed at run-time. The approach developed in this paper overcomes this difficulty by combining: decomposition of the large MDP into smaller ones; compression of the state space by exploiting characteristics of the multiple resource constraints; construction of local policies for the decomposed MDPs using state space discretization and resource compression; and recomposition of the local policies to obtain a near-optimal global policy. Finally, we present first experimental results showing the feasibility and performance of our approach.
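To illustrate the kind of local problem the approach deals with, the following is a minimal sketch, not the authors' implementation, of solving one local MDP for a single progressive-processing task over a discretized two-resource state (remaining time and energy) by a Bellman backup. The discretization granularity, the stochastic resource-consumption outcomes, the reward value, and all names are illustrative assumptions.

```python
# Minimal sketch: optimal execute/skip decision for one task as a function of
# the discretized remaining resources (time, energy). All numbers are assumed.
import itertools

TIME_LEVELS = range(0, 11)     # assumed discretization: 0..10 time units left
ENERGY_LEVELS = range(0, 11)   # assumed discretization: 0..10 energy units left

# Stochastic resource consumption of executing the task: (probability, time, energy).
EXECUTE_OUTCOMES = [(0.6, 2, 1), (0.3, 3, 2), (0.1, 5, 3)]
REWARD = 10.0                  # assumed reward for completing the task


def backup(time_left, energy_left, next_value):
    """One Bellman backup: choose the better of skipping or executing the task.

    next_value(t, e) is the value of handing (t, e) remaining resources to the
    rest of the task graph (identically zero in this single-task sketch)."""
    skip = next_value(time_left, energy_left)
    execute = 0.0
    for prob, dt, de in EXECUTE_OUTCOMES:
        t2, e2 = time_left - dt, energy_left - de
        if t2 >= 0 and e2 >= 0:
            execute += prob * (REWARD + next_value(t2, e2))
        # else: resource overrun for this outcome, no reward contributed
    return (execute, "execute") if execute > skip else (skip, "skip")


def solve_local_mdp():
    """Tabulate the optimal local value and policy over the resource grid."""
    terminal = lambda t, e: 0.0
    value, policy = {}, {}
    for t, e in itertools.product(TIME_LEVELS, ENERGY_LEVELS):
        value[(t, e)], policy[(t, e)] = backup(t, e, terminal)
    return value, policy


if __name__ == "__main__":
    value, policy = solve_local_mdp()
    print("V(10, 10) =", value[(10, 10)], "->", policy[(10, 10)])
    print("V(1, 1)   =", value[(1, 1)], "->", policy[(1, 1)])
```

In the full approach described in the paper, such local policies would be computed for each decomposed sub-MDP and then recomposed into a global policy; this sketch only shows the local resource-indexed backup.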