Conference paper, Year: 2015

A Framework for Coupling Visual Control and Active Structure from Motion

Abstract

In most sensor-based robotic applications, the robot state can only be partially retrieved from onboard sensors, and estimation strategies are necessary for recovering online an approximation of any 'missing information' required to accurately control the robot action. With the exception of some trivial cases, however, the relationship between the sensor readings and the robot state is often nonlinear. As a consequence, and regardless of the particular estimation scheme, the performance of the state estimation (e.g., convergence rate and/or final accuracy) depends, in general, on the particular trajectory followed by the sensor during the estimation process, with some trajectories being more informative than others. The perspective projection performed by cameras is a classical example of such a nonlinear sensor/state mapping. As is well known, a monocular camera cannot, for instance, estimate the depth of a point feature while traveling along the feature projection ray, and analogous constraints exist for other geometric primitives. This clearly creates a strong link between the motion performed by the robot/camera and the performance of any 3-D structure estimation algorithm. Conversely, poor accuracy in estimating the scene structure can also affect the performance of visual control schemes, resulting in poor or even unstable closed-loop behaviors. Indeed, it has been shown in, e.g., [1] that a poor approximation of the 3-D parameters of the scene can significantly affect the stability of Image-Based Visual Servoing (IBVS) controllers.

In light of these considerations, in this contribution (which briefly summarizes [2]) we propose an online coupling between action and perception in the context of robot visual control by considering, in particular, the class of Image-Based Visual Servoing (IBVS) schemes [3] as a representative case study. Indeed, besides being a widespread sensor-based technique, IBVS is also affected by all the aforementioned issues: on the one hand, whatever the chosen set of visual features (e.g., points, lines, planar patches), the associated interaction matrix always depends on some additional 3-D information not directly measurable from the visual input (e.g., the depth of a feature point). This information must therefore be approximated or estimated online, e.g., via a Structure from Motion (SfM) algorithm, and an inaccurate knowledge of it (because of, e.g., wrong approximations or poor SfM performance) can degrade the servoing execution and even lead to instabilities or loss of feature tracking. On the other hand, the SfM performance is directly affected by the particular trajectory followed by the camera during the servoing [4]–[6]: the IBVS controller should then be able to realize the main visual task while, at the same time, ensuring a sufficient level of information gain to allow an accurate state estimation. In order to meet these objectives, we investigate a possible coupling between a recently developed framework for active SfM [5], [6] (the active perception component of our approach) and the execution of a standard IBVS task (the visual control component of our approach). The main idea is to project any optimization of the camera motion (aimed at improving the SfM performance) within the null space of the considered task, so as not to degrade the servoing execution. However, for any reasonable IBVS application, a simple null-space projection of a camera trajectory optimization turns out to be ineffective because of a structural lack of redundancy.
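To make the depth dependence and the null-space projection concrete, the following is a minimal sketch based on the standard IBVS formulation of [3]; the notation ($\mathbf{L}_s$, $\mathbf{e}$, $\lambda$, $\mathbf{v}_{\mathrm{sfm}}$) is introduced here for illustration and is not taken from [2]. For a normalized image point $\mathbf{s} = (x, y)$ with depth $Z$, the classical interaction matrix is

\[
\mathbf{L}_s = \begin{bmatrix} -1/Z & 0 & x/Z & xy & -(1+x^2) & y \\ 0 & -1/Z & y/Z & 1+y^2 & -xy & -x \end{bmatrix},
\]

so that $\dot{\mathbf{s}} = \mathbf{L}_s \mathbf{v}$ with $\mathbf{v}$ the camera twist; the depth $Z$ is precisely the 3-D quantity that must be estimated online. A null-space-projected control law then takes the generic form

\[
\mathbf{v} = -\lambda \, \hat{\mathbf{L}}_s^{+} \mathbf{e} \;+\; \big(\mathbf{I}_6 - \hat{\mathbf{L}}_s^{+}\hat{\mathbf{L}}_s\big)\,\mathbf{v}_{\mathrm{sfm}},
\]

where $\mathbf{e} = \mathbf{s} - \mathbf{s}^{*}$ is the visual error, $\hat{\mathbf{L}}_s$ uses the current depth estimate, and $\mathbf{v}_{\mathrm{sfm}}$ is the camera-motion optimization term. With four or more point features $\hat{\mathbf{L}}_s$ generically has rank 6, the projector $\mathbf{I}_6 - \hat{\mathbf{L}}_s^{+}\hat{\mathbf{L}}_s$ vanishes, and no freedom is left for $\mathbf{v}_{\mathrm{sfm}}$: this is the structural lack of redundancy mentioned above.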
Therefore, in order to gain the freedom needed to implement the SfM optimization, we suitably exploit and extend the redundancy framework introduced in [7], which grants a larger projection operator by considering the norm of the visual error as the main task. In addition, we propose an adaptive mechanism able to activate/deactivate online the camera trajectory optimization as a function of the accuracy of the estimated 3-D structure. Thanks to this addition, it is possible to enable the SfM optimization only when strictly needed, e.g., when the 3-D estimation error grows larger than some desired threshold. In the following sections we give additional details on the active estimation (Sect. II) and redundancy resolution (Sect. III) frameworks, followed by some experimental results (Sect. IV) of the proposed approach.
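As a rough illustration of how the norm-based redundancy framework of [7] enlarges the available null space, consider the following sketch (our notation, not reproduced from [2] or [7]). Taking the scalar norm $\nu = \|\mathbf{e}\|$ as the main task, its Jacobian with respect to the camera twist is the $1 \times 6$ row vector $\mathbf{J}_\nu = \mathbf{e}^{\top}\mathbf{L}_e / \|\mathbf{e}\|$, so the associated projection operator

\[
\mathbf{P}_\nu = \mathbf{I}_6 - \mathbf{J}_\nu^{+}\mathbf{J}_\nu
= \mathbf{I}_6 - \frac{\mathbf{L}_e^{\top}\mathbf{e}\,\mathbf{e}^{\top}\mathbf{L}_e}{\mathbf{e}^{\top}\mathbf{L}_e\mathbf{L}_e^{\top}\mathbf{e}}
\]

has rank 5 whenever $\mathbf{e} \neq \mathbf{0}$, leaving five degrees of freedom for the SfM optimization term instead of the zero left by a full-rank interaction matrix. The adaptive activation can then be pictured as a state-dependent gain $\alpha \in [0,1]$ multiplying $\mathbf{P}_\nu \mathbf{v}_{\mathrm{sfm}}$, with $\alpha \to 1$ when the estimated 3-D error exceeds a chosen threshold and $\alpha \to 0$ otherwise; the specific activation law used in [2] is detailed in the paper itself.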

Dates and versions

hal-01332999, version 1 (16-06-2016)

Identifiers

  • HAL Id: hal-01332999, version 1

Cite

Riccardo Spica, Paolo Robuffo Giordano, François Chaumette. A Framework for Coupling Visual Control and Active Structure from Motion. IEEE Int. Conf. on Robotics and Automation Workshop on Scaling Up Active Perception, May 2015, Seattle, United States. ⟨hal-01332999⟩
