Conference Papers Year : 2023

Deep metric learning for visual servoing: when pose and image meet in latent space

Abstract

We propose a new visual servoing method that controls a robot's motion in a latent space. We aim to combine the best properties of two previously proposed servoing approaches: the accuracy of photometric methods such as Direct Visual Servoing (DVS), and the behavior and convergence of pose-based visual servoing (PBVS). Photometric methods suffer from a limited convergence area due to a highly non-linear cost function, while PBVS requires estimating the pose of the camera, which may introduce noise and incur a loss of accuracy. Our approach relies on shaping, with metric learning, a latent space in which the representations of camera poses and the embeddings of their respective images are tied together. By leveraging the multimodal nature of this shared space, our control law minimizes the difference between latent image representations using information obtained from a set of pose embeddings. Experiments in simulation and on a real robot validate our approach, showing that the sought-after benefits are effectively obtained.
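The abstract describes two ingredients: a metric-learning objective that ties image embeddings and pose embeddings together in a shared latent space, and a velocity control law driven by the latent error. The following is a minimal sketch of both in generic form, not the authors' implementation: the encoder architectures, the triplet-style alignment loss, and the latent interaction matrix L are assumptions introduced purely for illustration.

# Minimal sketch (hypothetical encoders and loss, not the paper's exact method):
#   1) pull each image embedding toward the embedding of its own camera pose
#      (a simple triplet-style metric-learning objective), and
#   2) apply a classical visual-servoing control law to the latent error.
import torch
import torch.nn as nn

latent_dim = 32

# Hypothetical encoders: one for 64x64 grayscale images, one for 6-DoF poses.
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(),
                              nn.Linear(256, latent_dim))
pose_encoder = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, latent_dim))

def alignment_loss(images, poses, margin=1.0):
    """Pull each image embedding toward the embedding of its own pose and push
    it away from the hardest other pose in the batch (triplet-style loss)."""
    z_img = image_encoder(images)                  # (B, latent_dim)
    z_pose = pose_encoder(poses)                   # (B, latent_dim)
    d = torch.cdist(z_img, z_pose)                 # pairwise distances (B, B)
    pos = d.diagonal()                             # matching image/pose pairs
    neg = d + torch.eye(len(d)) * 1e6              # mask out the positives
    return torch.relu(pos - neg.min(dim=1).values + margin).mean()

def servo_step(z, z_star, L, gain=0.5):
    """Velocity command from the classical law v = -lambda * L^+ (z - z*),
    with L an (assumed known) latent interaction matrix of shape (latent_dim, 6)."""
    return -gain * torch.linalg.pinv(L) @ (z - z_star)

# Toy usage with random tensors standing in for real images, poses and L.
images = torch.rand(8, 1, 64, 64)
poses = torch.rand(8, 6)
print("metric-learning loss:", alignment_loss(images, poses).item())
L = torch.rand(latent_dim, 6)
z = image_encoder(images[:1]).squeeze(0).detach()       # current image embedding
z_star = image_encoder(images[1:2]).squeeze(0).detach() # desired image embedding
print("6-DoF velocity command:", servo_step(z, z_star, L))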

Domains

Automatic
Main file
ICRA23_1860_FI.pdf (1.04 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04003126, version 1 (23-02-2023)

Identifiers

  • HAL Id: hal-04003126, version 1

Cite

Samuel Felton, Élisa Fromont, Eric Marchand. Deep metric learning for visual servoing: when pose and image meet in latent space. ICRA 2023 - IEEE International Conference on Robotics and Automation, May 2023, London, United Kingdom. pp.1-7. ⟨hal-04003126⟩