3D-Aware Ellipse Prediction for Object-Based Camera Pose Estimation
Abstract
In this paper, we propose a method for coarse camera pose computation that is robust to viewing conditions and does not require a detailed model of the scene. This method meets the growing need for easy deployment of robotics or augmented reality applications in any environment, especially those for which neither an accurate 3D model nor large amounts of ground-truth data are available. It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions. Previous works have also shown that abstracting the geometry of a scene of objects as an ellipsoid cloud makes it possible to compute the camera pose accurately enough for various application needs. Though promising, these approaches use the ellipses fitted to the detection bounding boxes as an approximation of the imaged objects. In this paper, we go one step further and propose a learning-based method that detects improved elliptic approximations of objects, coherent with the 3D ellipsoids in terms of perspective projection. Experiments show that our method significantly increases the accuracy of the computed pose and makes it more robust to the variability of the detection box boundaries. This is achieved with very little effort in terms of training data acquisition: a few hundred calibrated images, of which only three need manual object annotation.
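The abstract builds on the classical relation between a 3D ellipsoid and its perspective image: in dual (envelope) form, a dual quadric Q* projects to the dual conic C* ∝ P Q* Pᵀ under a 3x4 projection matrix P. The sketch below is a minimal NumPy illustration of that relation, together with the box-inscribed ellipse used as an approximation by prior works; function names and the toy camera parameters are illustrative assumptions, not code from the paper.

```python
import numpy as np

def ellipsoid_dual_quadric(center, axes, R=np.eye(3)):
    """Dual quadric Q* of an ellipsoid with given center, semi-axes and rotation R."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = center
    return T @ np.diag([axes[0]**2, axes[1]**2, axes[2]**2, -1.0]) @ T.T

def project_ellipsoid(P, Q_dual):
    """Project a dual ellipsoid through a 3x4 camera matrix P: C* = P Q* P^T (up to scale)."""
    return P @ Q_dual @ P.T

def conic_to_ellipse(C_dual):
    """Recover center, semi-axes and orientation from a dual conic.

    Assumes the ellipsoid lies in front of the camera, so its image is a bounded ellipse.
    """
    C = C_dual / -C_dual[2, 2]            # normalise so that C[2, 2] = -1
    center = -C[:2, 2]                    # ellipse center in pixels
    A = C[:2, :2] + np.outer(center, center)
    w, V = np.linalg.eigh(A)              # eigenvalues = squared semi-axes (ascending)
    semi_axes = np.sqrt(np.maximum(w, 0.0))
    angle = np.arctan2(V[1, 1], V[0, 1])  # orientation of the major axis
    return center, semi_axes, angle

def bbox_inscribed_ellipse(xmin, ymin, xmax, ymax):
    """Axis-aligned ellipse inscribed in a detection box (approximation used by prior works)."""
    center = np.array([(xmin + xmax) / 2.0, (ymin + ymax) / 2.0])
    semi_axes = np.array([(xmax - xmin) / 2.0, (ymax - ymin) / 2.0])
    return center, semi_axes, 0.0

if __name__ == "__main__":
    # Toy pinhole camera (hypothetical values): focal 800 px, principal point (320, 240),
    # identity rotation, object placed 4 m in front of the camera.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [4.0]])])
    P = K @ Rt

    Q = ellipsoid_dual_quadric(center=[0.0, 0.0, 0.0], axes=[0.5, 0.3, 0.2])
    center, semi_axes, angle = conic_to_ellipse(project_ellipsoid(P, Q))
    print("projected ellipse:", center, semi_axes, np.degrees(angle))
```

A 3D-aware ellipse prediction aims at image ellipses consistent with this projection model, rather than the box-inscribed approximation above, which is what makes the downstream pose computation less sensitive to the exact box boundaries.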