Using Geometric Constraints Through Parallelepipeds for Calibration and 3D Modelling
Abstract
This paper concerns the incorporation of geometric information into camera calibration and 3D modeling. Using geometric constraints leads to more stable results and allows these tasks to be performed with fewer images. Our approach is motivated by, and developed within, a framework of semi-automatic 3D modeling, where the user defines geometric primitives and constraints between them. It is based on the observation that constraints such as coplanarity, parallelism, or orthogonality are often intuitively embedded in parallelepipeds. Moreover, parallelepipeds are easy for a user to delineate and are well suited to modeling the main structure of, for example, architectural scenes. In this paper, we first describe a duality that exists between the shape of a parallelepiped and the intrinsic parameters of a camera. We then develop a factorization-based algorithm exploiting this relation. From images of parallelepipeds, it simultaneously calibrates the cameras, recovers the shapes of the parallelepipeds, and estimates the relative pose of all entities. Dealing with a well-constrained three-dimensional structure makes it possible to overcome the common problems of factorization methods: missing data and unknown scale factors. The reconstruction obtained this way is affine. To remove the affine ambiguity, all available metric information can be used simultaneously: constraints on the parallelepipeds' edge lengths and angles, as well as the usual self-calibration constraints on the cameras. The proposed algorithm is complemented by a study of the singular cases of the calibration method. A method for reconstructing scene primitives that are not modeled by parallelepipeds is also introduced. The approach is validated by various experimental results on real and simulated scenes, for cases where a single view or several views are available.
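As a hedged illustration of the duality mentioned above (the notation here is assumed for exposition and is not taken verbatim from the paper), let $\omega = K^{-\top}K^{-1}$ be the image of the absolute conic of a camera with intrinsic matrix $K$, and let $\mu = \Lambda^{\top}\Lambda$ be the Gram matrix of a parallelepiped obtained from the unit cube by the linear map $\Lambda$, so that $\mu$ encodes the parallelepiped's edge lengths and angles. If $\tilde{X} \simeq K R \Lambda$ denotes the $3\times 3$ leading part of the parallelepiped's projection, recoverable up to scale from its imaged vertices, then
\[
\tilde{X}^{\top}\,\omega\,\tilde{X} \;\propto\; \Lambda^{\top}R^{\top}K^{\top}\,K^{-\top}K^{-1}\,K R\,\Lambda \;=\; \Lambda^{\top}\Lambda \;=\; \mu ,
\]
so prior knowledge of the parallelepiped's shape ($\mu$) constrains the camera's intrinsic parameters (through $\omega$), and, conversely, knowledge of the intrinsic parameters constrains the parallelepiped's shape.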