Conference Paper, Year: 2015

Collaborative Visual SLAM Framework for a Multi-Robot System

Abstract

This paper presents a framework for collaborative visual SLAM using monocular cameras for a team of mobile robots. The robots perform SLAM individually using their on-board processors, estimating the seven degrees of freedom (including scale) of the camera motion and creating a map of the environment as a pose graph of keyframes. Each robot communicates with a central server by sending local keyframe information. The central server merges the local maps when a visual overlap is detected in the scene and creates a global map. In the background, the global map is continuously optimized using bundle adjustment techniques, and the updated pose information is communicated back as feedback to the individual robots. We present preliminary experimental results from testing the framework with two mobile robots in an indoor environment.
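The abstract describes a client–server architecture: each robot sends keyframes (pose plus a visual descriptor) to a central server, which detects inter-robot overlap, merges the maps, and feeds optimized poses back. The sketch below is a minimal, hypothetical illustration of that message flow in Python; the names (Keyframe, MapServer), the descriptor-similarity overlap check, and the placeholder optimization step are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of the keyframe exchange described in the abstract.
# Overlap detection and bundle adjustment are reduced to placeholders.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Keyframe:
    robot_id: int
    frame_id: int
    # 7-DoF similarity transform (translation, rotation, scale), flattened
    # into a tuple for brevity.
    pose_sim3: Tuple[float, ...]
    # Compact visual descriptor (e.g. a bag-of-words vector) used by the
    # server for place recognition / overlap detection.
    descriptor: Tuple[float, ...]


class MapServer:
    """Central server: collects keyframes from all robots, records map
    merges when visual overlap is detected, and returns updated poses."""

    def __init__(self, overlap_threshold: float = 0.8):
        self.keyframes: List[Keyframe] = []
        self.overlap_threshold = overlap_threshold
        self.merged_pairs: List[Tuple[int, int]] = []

    def _similarity(self, a: Keyframe, b: Keyframe) -> float:
        # Placeholder: cosine similarity between descriptors.
        dot = sum(x * y for x, y in zip(a.descriptor, b.descriptor))
        na = sum(x * x for x in a.descriptor) ** 0.5
        nb = sum(x * x for x in b.descriptor) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def receive_keyframe(self, kf: Keyframe) -> Dict[int, Tuple[float, ...]]:
        # Compare against keyframes from *other* robots to detect overlap.
        for other in self.keyframes:
            if (other.robot_id != kf.robot_id
                    and self._similarity(kf, other) >= self.overlap_threshold):
                self.merged_pairs.append((other.frame_id, kf.frame_id))
        self.keyframes.append(kf)
        return self._optimize_and_feedback()

    def _optimize_and_feedback(self) -> Dict[int, Tuple[float, ...]]:
        # Stand-in for background bundle adjustment over the global pose
        # graph; here it simply returns the latest pose per robot.
        latest: Dict[int, Tuple[float, ...]] = {}
        for kf in self.keyframes:
            latest[kf.robot_id] = kf.pose_sim3
        return latest


if __name__ == "__main__":
    server = MapServer()
    server.receive_keyframe(Keyframe(1, 0, (0, 0, 0, 0, 0, 0, 1.0), (1.0, 0.0, 1.0)))
    feedback = server.receive_keyframe(Keyframe(2, 0, (1, 0, 0, 0, 0, 0, 1.0), (0.9, 0.1, 1.0)))
    print("merged pairs:", server.merged_pairs)
    print("feedback poses:", feedback)
```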

Domains

Automatic
Main file

PPNIV15Nived.pdf (204.94 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02459361, version 1 (29-01-2020)

Identifiers

  • HAL Id: hal-02459361, version 1

Cite

Nived Chebrolu, David Marquez-Gamez, Philippe Martinet. Collaborative Visual SLAM Framework for a Multi-Robot System. 7th Workshop on Planning, Perception and Navigation for Intelligent Vehicles, Sep 2015, Hamburg, Germany. ⟨hal-02459361⟩
