Informative and Communication-Efficient Multi-Agent Path Planning for Pollution Plume Monitoring
Abstract
In this paper, we propose an efficient framework for monitoring pollution plumes using sensor-equipped drones. Our approach leverages Reinforcement Learning and Mutual Information to strategically plan drone paths, maximizing the informativeness of the collected data while minimizing communication costs. We propose a multi-agent Independent Q-Learning scheme in which drones act independently but share a global team reward. The reward is computed from both the reduction in plume estimation uncertainty and the communication costs. The proposed framework is adaptable to various problem instances, making it suitable for monitoring diverse physical phenomena. We conduct extensive simulations showing the effectiveness of our approach in achieving high-quality plume monitoring, with a variance estimation error ranging from 3% to 5% compared with the ground-truth value. The results also show that our solution offers a good compromise between plume estimation quality and communication costs. In terms of total reward under the proposed scenarios, the framework outperforms a random-walk approach by up to 32.88% and genetic-based heuristics by up to 4.2%. The proposed framework is advantageous because it not only provides a good solution but also infers it in reasonable time, especially compared with the solution obtained by genetic-based heuristics.
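To make the reward structure described above concrete, the following is a minimal sketch (not the authors' implementation) of an Independent Q-Learning update driven by a shared team reward that trades off uncertainty reduction against communication cost. All names, the tabular representation, and the weighting factor `LAMBDA_COMM` are illustrative assumptions.

```python
import numpy as np

ALPHA = 0.1          # learning rate (assumed)
GAMMA = 0.95         # discount factor (assumed)
LAMBDA_COMM = 0.5    # weight on communication cost (assumed)

def team_reward(uncertainty_before, uncertainty_after, comm_cost):
    """Shared team reward: reduction in plume-estimation uncertainty
    minus a weighted communication cost, as outlined in the abstract."""
    return (uncertainty_before - uncertainty_after) - LAMBDA_COMM * comm_cost

def iql_update(q_table, state, action, reward, next_state):
    """Standard tabular Q-learning update, applied independently by each
    drone while all drones receive the same (global) reward signal."""
    best_next = np.max(q_table[next_state])
    td_target = reward + GAMMA * best_next
    q_table[state, action] += ALPHA * (td_target - q_table[state, action])
    return q_table
```

In this sketch, each drone keeps its own `q_table` and selects actions from it independently, while the scalar returned by `team_reward` is broadcast to all agents, which is the defining feature of the shared-reward Independent Q-Learning scheme the abstract describes.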