Convergence domain of image-based visual servoing with a line-scan camera
Abstract
Visual servoing (see [1] for an introduction to the basic approaches) consists in controlling the motion
of a robot using computer vision data. Visual servoing schemes aim to minimize an error defined
between a vector s of visual features derived from image measurements and the vector s∗ of the
desired values of these features (which corresponds to the reference position). A first classical visual
servoing scheme is image-based visual servoing (IBVS), which uses for s a set of features that are
directly available in the image data. Another is position-based visual servoing, where s is a set
of robot position parameters that have to be estimated from image data.
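In the classical formulation (see, e.g., [1]), the error is e(t) = s(t) - s∗ and, denoting by L_s the interaction matrix that relates the time variation of the features to the camera velocity v_c through ṡ = L_s v_c, the control law v_c = -λ L_s^+ e is applied, where λ > 0 is a gain and L_s^+ is the Moore-Penrose pseudo-inverse of L_s (or of an approximation of it), which aims at an exponential decrease of the error.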
The classical IBVS approach is considered in the sequel. It uses the image coordinates of a set of
points to define the feature vector s; these coordinates are compared to their counterparts in a
reference image taken at the desired camera pose in order to control the robot motion. The stability
and convergence of IBVS have been studied but remain challenging to characterize [2]. Visual servoing
is performed here in the so-called eye-in-hand configuration, in which the camera is mounted on the robot.
A holonomic robot with 3 degrees of freedom is considered. Its configuration is given by its coordinates
(x, y) in the plane and its heading θ. The robot is equipped with a line-scan camera, i.e., a camera
that captures a single row of pixels (an image line). For the sake of simplicity, the camera pose and
the robot pose are assumed to coincide.
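To fix ideas, a minimal sketch of the line-scan projection model is given below (in Python). It assumes planar feature points, a camera whose optical axis is aligned with the robot heading and whose image line is orthogonal to it, and illustrative values for the focal length and the field-of-view limit; none of these conventions is imposed by the description above.

import numpy as np

def linescan_project(points_w, pose, f=800.0, u_max=1024.0):
    """Project planar world points onto the image line of a line-scan camera.
    points_w: (N, 2) world coordinates; pose: (x, y, theta) camera/robot pose;
    f: focal length in pixels; u_max: half-width of the image line in pixels.
    Returns the pixel coordinates u (N,) and a boolean visibility mask."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = points_w[:, 0] - x, points_w[:, 1] - y
    # World to camera frame: x_c along the optical axis, y_c along the image line.
    x_c = c * dx + s * dy
    y_c = -s * dx + c * dy
    u = f * y_c / x_c                             # 1D perspective projection
    visible = (x_c > 0) & (np.abs(u) <= u_max)    # in front of the camera and on the line
    return u, visible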
This work aims to compute the set of camera poses from which IBVS converges to the reference
pose (the pose corresponding to the reference image). Since classical IBVS relies on matching feature
points between the current image and the reference image, we also need to check that the feature
points remain in the camera field of view at all times.
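As a purely illustrative counterpart of this objective (and not the characterization developed in this work), one can approximate such a set by sampling candidate initial poses on a grid, simulating the classical IBVS loop from each of them, and keeping the poses for which the camera reaches the reference pose while all feature points remain visible. The sketch below reuses the linescan_project function and conventions introduced above; the interaction matrix is the 1D analogue of the classical point-feature one, and the gain, step size, thresholds, and grid are arbitrary choices.

def ibvs_converges(points_w, pose0, pose_ref, f=800.0, u_max=1024.0,
                   gain=0.5, dt=0.01, n_steps=5000, tol=1e-3):
    """Simulate classical IBVS from pose0; return True if the camera reaches
    pose_ref while every feature point stays inside the field of view."""
    u_ref, vis = linescan_project(points_w, pose_ref, f, u_max)
    assert vis.all(), "reference features must be visible from the reference pose"
    pose = np.array(pose0, dtype=float)
    for _ in range(n_steps):
        u, vis = linescan_project(points_w, pose, f, u_max)
        if not vis.all():
            return False                          # a feature left the field of view
        x, y, theta = pose
        c, s = np.cos(theta), np.sin(theta)
        # Depth of each point along the optical axis, needed by the interaction matrix.
        x_c = c * (points_w[:, 0] - x) + s * (points_w[:, 1] - y)
        un = u / f                                # normalized feature coordinates
        # Rows of the 1D point interaction matrix: d(un)/dt = L @ (v_x, v_y, omega).
        L = np.stack([un / x_c, -1.0 / x_c, -(1.0 + un**2)], axis=1)
        e = (u - u_ref) / f
        v = -gain * np.linalg.pinv(L) @ e         # camera-frame velocity command
        # Integrate the pose: map the camera-frame translation to the world frame.
        pose += dt * np.array([c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]])
        if np.linalg.norm(pose - pose_ref) < tol:
            return True
    return False

def convergence_domain(points_w, pose_ref, xs, ys, thetas):
    """Grid sampling of initial poses; return those from which IBVS converged."""
    return [(x0, y0, th0) for x0 in xs for y0 in ys for th0 in thetas
            if ibvs_converges(points_w, (x0, y0, th0), np.array(pose_ref, dtype=float))]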
Domains
Robotics [cs.RO]