Fusion of Multi-View Silhouette Cues Using a Space Occupancy Grid
Abstract
In this paper, we investigate what can be inferred from several silhouette probability maps in multi-camera environments. To this end, we propose a new framework for multi-view silhouette cue fusion. This framework uses a space occupancy grid as a probabilistic 3D representation of scene contents. Such a representation is of great interest for various computer vision applications, in perception or localization for instance. Our main contribution is to introduce the occupancy grid concept, popular in the robotics community, to multi-camera environments. The idea is to consider each camera pixel as a statistical occupancy sensor. All pixel observations are then used jointly to infer where, and with what likelihood, matter is present in the scene. As our results illustrate, this simple model has several advantages. Most sources of uncertainty are explicitly modeled, and no premature decisions about pixel labeling occur, thus preserving pixel knowledge. Consequently, optimal scene object localization and robust volume reconstruction can be achieved, with no constraints on camera placement or object visibility. In addition, this representation makes it possible to improve silhouette extraction in images.
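As a concrete illustration of this kind of fusion, the sketch below projects each voxel of a grid into every calibrated view, reads the silhouette probability at the projected pixel, and accumulates the evidence as log-odds of occupancy. It assumes calibrated pinhole cameras given as 3x4 projection matrices, independent per-view observations, and a simplified two-parameter sensor model (detection rate `p_d`, false-alarm rate `p_fa`); it is only a minimal approximation of the idea, not the authors' exact sensor model.

```python
# Minimal sketch of multi-view silhouette cue fusion into an occupancy grid.
# Assumptions (not the paper's exact formulation): independent per-view
# observations and a two-parameter sensor model (p_d, p_fa).
import numpy as np

def fuse_silhouette_cues(voxel_centers, cameras, silhouette_maps,
                         p_d=0.9, p_fa=0.1, prior=0.5):
    """voxel_centers: (N, 3) world points (grid cell centers).
    cameras: list of 3x4 projection matrices.
    silhouette_maps: list of (H, W) arrays of silhouette probabilities.
    Returns (N,) occupancy probabilities."""
    n = len(voxel_centers)
    log_odds = np.full(n, np.log(prior / (1.0 - prior)))
    homog = np.hstack([voxel_centers, np.ones((n, 1))])

    for P, sil in zip(cameras, silhouette_maps):
        proj = homog @ P.T                      # project voxel centers
        u = proj[:, 0] / proj[:, 2]
        v = proj[:, 1] / proj[:, 2]
        h, w = sil.shape
        inside = (proj[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

        # Silhouette probability observed at each visible projected voxel.
        p_sil = sil[v[inside].astype(int), u[inside].astype(int)]

        # Likelihood of the observation under "occupied" vs "empty",
        # then accumulate the log-likelihood ratio.
        lik_occ = p_d * p_sil + (1 - p_d) * (1 - p_sil)
        lik_emp = p_fa * p_sil + (1 - p_fa) * (1 - p_sil)
        log_odds[inside] += np.log(lik_occ / lik_emp)

    return 1.0 / (1.0 + np.exp(-log_odds))
```

Thresholding or extracting an isosurface from the resulting grid gives a volume estimate, while local maxima of occupancy can serve for object localization, in the spirit of the applications mentioned above.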
Main file: franco_boyer_iccv05.pdf (943.1 KB)
Additional files: iccv05.jpg (14.4 KB), SilhouetteCueFusion-iccv05.avi (5.02 MB), iccv05.png (206.61 KB), inria-00349083.png (19.64 KB)