Detecting Keypoints with Stable Position, Orientation and Scale under Illumination Changes
Abstract
Local feature approaches to vision geometry and object recognition are based on selecting and matching sparse sets of visually salient image points, known as 'keypoints' or 'points of interest'. Their performance depends critically on the accuracy and reliability with which corresponding keypoints can be found in subsequent images. Among the many existing keypoint selection criteria, the popular Förstner-Harris approach explicitly targets geometric stability, defining keypoints to be points that have locally maximal self-matching precision under translational least squares template matching. However, many applications require stability in orientation and scale as well as in position. Detecting translational keypoints and verifying orientation/scale behaviour post hoc is suboptimal, and can be misleading when different motion variables interact. We give a more principled formulation, based on extending the Förstner-Harris approach to general motion models and robust template matching. We also incorporate a simple local appearance model to ensure good resistance to the most common illumination variations. We illustrate the resulting methods and quantify their performance on test images.
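As background, the classical translational Förstner-Harris criterion referred to above scores a candidate point $\mathbf{x}$ by the windowed autocorrelation (structure-tensor) matrix of the image gradients $I_x, I_y$. The sketch below is the standard textbook formulation, not a quotation from the paper; $w$ denotes a local smoothing window and $k \approx 0.04$ is the usual empirical constant.

\[
A(\mathbf{x}) \;=\; \sum_{\mathbf{u}} w(\mathbf{u})
\begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}\!(\mathbf{x}+\mathbf{u}),
\qquad
R_{\text{F\"orstner}} = \frac{\det A}{\operatorname{trace} A},
\qquad
R_{\text{Harris}} = \det A - k\,(\operatorname{trace} A)^2 .
\]

Points where both eigenvalues of $A$ are large are precisely those with high translational self-matching precision; the paper's extension replaces this purely translational motion model with more general motion and local illumination models.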