Collaborative Visual SLAM Framework for a Multi-Robot System
Abstract
This paper presents a framework for collaborative visual SLAM using monocular cameras on a team of mobile robots. Each robot performs SLAM individually on its on-board processor, estimating the seven degrees of freedom (including scale) of the camera motion and building a map of the environment as a pose graph of keyframes. Each robot communicates with a central server by sending its local keyframe information. The central server merges the local maps into a global map whenever a visual overlap between scenes is detected. In the background, the global map is continuously optimized using bundle adjustment techniques, and the updated pose information is sent back as feedback to the individual robots. We present preliminary experimental results from testing the framework with two mobile robots in an indoor environment.
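To make the client-server architecture described above concrete, the sketch below outlines one possible shape of the data flow, in Python: per-robot keyframes carrying a Sim(3) pose (the seven degrees of freedom including scale) are sent to a central server, which checks for visual overlap against keyframes from other robots, records inter-robot constraints for map merging, and exposes a hook for the background optimization whose corrected poses are fed back. All class and method names here are illustrative assumptions, not the paper's actual implementation, and the overlap test and optimization are placeholders.

```python
from dataclasses import dataclass, field

# A Sim(3) pose: rotation (quaternion), translation, and scale -- the seven
# degrees of freedom estimated by each robot's monocular SLAM front end.
@dataclass
class Sim3Pose:
    qw: float = 1.0
    qx: float = 0.0
    qy: float = 0.0
    qz: float = 0.0
    tx: float = 0.0
    ty: float = 0.0
    tz: float = 0.0
    scale: float = 1.0

# One node of a robot's local pose graph, as it might be serialized and sent
# to the central server (descriptors stand in for the image content used
# for visual overlap detection).
@dataclass
class KeyframeMsg:
    robot_id: int
    keyframe_id: int
    pose: Sim3Pose
    descriptors: list = field(default_factory=list)

class CentralServer:
    """Collects keyframes from all robots and maintains a global pose graph."""

    def __init__(self):
        self.global_map = {}          # (robot_id, keyframe_id) -> KeyframeMsg
        self.merge_constraints = []   # inter-robot edges found via overlap

    def on_keyframe(self, msg: KeyframeMsg):
        key = (msg.robot_id, msg.keyframe_id)
        self.global_map[key] = msg
        # Check the new keyframe only against keyframes from *other* robots.
        for other_key, other in self.global_map.items():
            if other.robot_id == msg.robot_id:
                continue
            if self._visual_overlap(msg, other):   # place recognition
                self.merge_constraints.append((key, other_key))

    def _visual_overlap(self, a: KeyframeMsg, b: KeyframeMsg) -> bool:
        # Placeholder: a real system would match descriptors (e.g. with a
        # bag-of-words vocabulary) and verify geometry before merging.
        common = set(a.descriptors) & set(b.descriptors)
        return len(common) >= 20

    def optimize_and_feedback(self):
        # Placeholder for the background bundle adjustment / pose-graph
        # optimization; returns updated poses so they can be sent back
        # as feedback to the individual robots.
        return {key: kf.pose for key, kf in self.global_map.items()}
```

The key design point reflected here is the asymmetry of the system: robots only ever send keyframe messages and receive pose corrections, while all cross-robot reasoning (overlap detection, map merging, global optimization) lives on the server.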
Domains
Automatic Control / Robotics

Origin
Files produced by the author(s)