Hierarchical Visual Perception without Calibration
Abstract
We analyze the equations and the formalism that allow a monocular visual system without calibration to achieve dynamic visual perception of geometric and kinematic 3D information. Considering the emergence of active visual systems for which the calibration parameters can be considered neither known nor fixed, we develop an alternative strategy based on two complementary facts: (i) several perceptual tasks can be performed without knowing the calibration parameters, while, for the other perceptual tasks, (ii) certain classes of special displacements induce enough equations to evaluate the calibration parameters, so that the Euclidean structure of the scene can be recovered when needed. A synthesis of what can be recovered in terms of scene geometry and kinematics is proposed: for each level of calibration, we give an exhaustive list of the geometric and kinematic information that can be recovered. Following a strategy based on special kinds of displacements, such as fixed-axis rotations or pure translations, we also describe how to control the robotic system so as to generate these particular classes of displacement. The implementation of these equations is analyzed, and some experimental results are reported.
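As a point of reference (and not necessarily the exact formulation used in the paper), one classical way such special displacements constrain the calibration is through a rotation about the optical center: the two views are then related by the infinite homography $H_\infty = K\,R\,K^{-1}$, where $K$ gathers the unknown intrinsic parameters and $R$ is the rotation, so the image of the absolute conic $\omega = K^{-\top}K^{-1}$ must be invariant under this homography. Each such displacement thus yields linear equations in the entries of $\omega$, from which $K$ can be recovered:
\[
H_\infty \;=\; K\,R\,K^{-1},
\qquad
\omega \;=\; K^{-\top}K^{-1},
\qquad
\omega \;=\; H_\infty^{-\top}\,\omega\,H_\infty^{-1}.
\]
Stacking the constraints from two or more independent rotations determines $\omega$ (up to scale), and $K$ follows by Cholesky factorization of $\omega^{-1}$.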
Domains
Other [cs.OH]