Conference paper, Year: 2012

Video Structuring: From Pixels to Visual Entities

Abstract

In this paper we propose a complete framework for the automatic detection of salient objects in video streams. The video stream is first segmented into shots using a scale-space filtering graph partition method. For each detected shot, an associated static summary is built with a leap keyframe extraction method. Based on these representative images, we then introduce a combined spatial and temporal video attention model that is able to recognize both interesting objects and actions in image sequences. The approach extends state-of-the-art region-based contrast saliency with a temporal attention model. The different types of motion present in the current shot are determined using a set of homographic transforms, estimated by recursively applying the RANSAC algorithm to the interest point correspondences. Finally, a decision is taken based on the combined information from both saliency maps. The experimental results validate the proposed framework and demonstrate that our approach is suitable for various types of videos and is robust to noise and low resolution.
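To make the motion-estimation step more concrete, the sketch below recursively fits homographies to interest-point correspondences with RANSAC, removing each model's inliers before re-fitting, so that several motion layers (e.g. camera motion versus independently moving objects) can be separated. It is only an illustrative sketch: it assumes OpenCV with ORB features and brute-force Hamming matching, which are placeholder choices not specified by the paper, and the thresholds (max_models, ransac_thresh, min_inliers) are likewise illustrative.

import cv2
import numpy as np

def dominant_motion_homographies(prev_gray, curr_gray, max_models=3,
                                 ransac_thresh=3.0, min_inliers=12):
    """Recursively estimate homographic motion models between two frames.

    Each pass fits one homography with RANSAC, discards its inliers,
    and repeats on the remaining matches (assumed parameters throughout).
    """
    # Detect and match interest points between consecutive frames.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])

    models = []
    while len(src) >= min_inliers and len(models) < max_models:
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
        if H is None or mask.sum() < min_inliers:
            break
        models.append(H)
        # Keep only the outliers of this model for the next pass.
        keep = mask.ravel() == 0
        src, dst = src[keep], dst[keep]
    return models

Each returned 3x3 matrix approximates one motion layer, which could then feed a temporal saliency map of the kind combined with the spatial map in the decision step.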
Main file: 1569588953.pdf (727.77 KB). Origin: files produced by the author(s).

Dates and versions

hal-00735698, version 1 (26-09-2012)

Identifiers

  • HAL Id: hal-00735698, version 1

Cite

Ruxandra Tapu, Titus Zaharia. Video Structuring: From Pixels to Visual Entities. 20th European Signal Processing Conference (EUSIPCO-2012), Aug 2012, Bucharest, Romania. pp.1583-1587. ⟨hal-00735698⟩
