Cooperative Deep Reinforcement Learning for Dynamic Pollution Plume Monitoring using a Drone Fleet
Abstract
Monitoring pollution plumes is a key issue, given the harmful effects they cause. The dynamics of these plumes, which can be strong depending on meteorological conditions, make their study difficult. Real-time monitoring that yields an accurate map of the pollution dispersion is therefore helpful and valuable for mitigating risks. In this work, we consider a fleet of cooperative drones carrying pollution sensors and operating to assess a pollution plume, which is assumed to follow a Gaussian Process (GP) with varying parameters. For this use case, we propose an efficient approach to characterize the plume spatially and temporally while optimizing the drones' path planning. In our approach, drones are guided by a Deep Reinforcement Learning (DRL) model, the Categorical Deep Q-Network (Categorical DQN), to maximize plume coverage under budget constraints. Specifically, we develop a scalable Independent Q-Learning (IQL) scheme that shares team rewards according to each drone's deployment relevance, thereby ensuring cooperation. We evaluate the performance of the plume parameter estimation as well as the maps generated by GP regression. By testing our framework on several plume scenarios, we show that it offers good results in terms of both estimation quality and run-time efficiency.
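To make the GP-based mapping step concrete, the following is a minimal sketch, not the authors' code: it fits a GP regression to noisy concentration samples gathered at scattered drone positions and produces a posterior mean map plus an uncertainty field, which is the kind of coverage signal a DRL planner could reward. The synthetic plume function `plume_concentration` and all parameter values are hypothetical stand-ins, not taken from the paper.

```python
# Sketch only: GP regression of a pollution field from scattered drone
# measurements, assuming an RBF kernel with hyperparameters fit to the data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)

def plume_concentration(xy):
    # Hypothetical ground truth: a single Gaussian puff centered at (50, 60).
    center = np.array([50.0, 60.0])
    return 100.0 * np.exp(-np.sum((xy - center) ** 2, axis=1) / (2 * 15.0 ** 2))

# Noisy concentration samples collected along the drones' paths.
X_obs = rng.uniform(0.0, 100.0, size=(40, 2))
y_obs = plume_concentration(X_obs) + rng.normal(0.0, 1.0, size=40)

# GP prior with unknown amplitude and length-scale; hyperparameters are
# estimated by maximizing the marginal likelihood, mirroring the paper's
# assumption of a GP plume with varying parameters.
kernel = ConstantKernel(1.0) * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_obs, y_obs)

# Posterior mean (the dispersion map) and standard deviation (the residual
# uncertainty a path planner would try to drive down) on a regular grid.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
mean_map, std_map = gp.predict(grid, return_std=True)
print("max posterior std:", std_map.max())
```

In this reading, each drone's contribution to shrinking the posterior uncertainty is one plausible notion of "deployment relevance" for sharing the team reward, though the paper's exact reward definition is not reproduced here.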