Conference paper — Year: 2008

Action Recognition using Exemplar-based Embedding

Abstract

In this paper, we address the problem of representing human actions using visual cues for the purpose of learning and recognition. Traditional approaches model actions as space-time representations which explicitly or implicitly encode the dynamics of an action through temporal dependencies. In contrast, we propose a new compact and efficient representation which does not account for such dependencies. Instead, motion sequences are represented with respect to a set of discriminative static key-pose exemplars, without modeling any temporal ordering. The benefit is a time-invariant representation that drastically simplifies learning and recognition by removing time-related information such as the speed or length of an action. The proposed representation is equivalent to embedding actions into a space defined by distances to key-pose exemplars. We show how to build such embedding spaces of low dimension by identifying a vocabulary of highly discriminative exemplars using forward selection. We test our representation on a publicly available dataset and show that our method can precisely recognize actions, even in cluttered and non-segmented sequences.
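The abstract describes two components: embedding a sequence as a vector of its distances to a set of key-pose exemplars, and greedily selecting a small, discriminative exemplar vocabulary by forward selection. Below is a minimal sketch of that idea, assuming frame-wise pose descriptors are plain feature vectors compared with Euclidean distance and using a generic classifier's training accuracy as the selection criterion; the paper's actual descriptors, distance measure, and classifier are not given here, so those choices are placeholders.

```python
# Sketch of exemplar-based embedding for action recognition.
# Assumptions (not from the paper): frame descriptors are fixed-length
# vectors, distances are Euclidean, and a logistic-regression classifier
# stands in for the actual learning scheme.

import numpy as np
from sklearn.linear_model import LogisticRegression


def embed(sequence, exemplars):
    """Embed a motion sequence (T x D frame descriptors) as the vector of
    minimum distances to each key-pose exemplar (K x D), ignoring temporal
    ordering entirely."""
    # Distance of every frame to every exemplar, then minimum over time.
    dists = np.linalg.norm(sequence[:, None, :] - exemplars[None, :, :], axis=2)
    return dists.min(axis=0)  # shape: (num_exemplars,)


def forward_select(candidates, train_seqs, train_labels, k):
    """Greedy forward selection of k exemplars from a candidate pool
    (N x D), keeping at each step the candidate that most improves a
    simple training-accuracy criterion on the embedded sequences."""
    selected, remaining = [], list(range(len(candidates)))
    for _ in range(k):
        best_idx, best_score = None, -1.0
        for idx in remaining:
            trial = candidates[selected + [idx]]
            X = np.stack([embed(s, trial) for s in train_seqs])
            clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
            score = clf.score(X, train_labels)
            if score > best_score:
                best_idx, best_score = idx, score
        selected.append(best_idx)
        remaining.remove(best_idx)
    return candidates[selected]
```

Because the embedding has a fixed length equal to the vocabulary size regardless of sequence length or speed, any standard classifier can be trained directly on the embedded vectors.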
Main file: weinland08.pdf (565.85 KB)
Origin: files produced by the author(s)

Dates and versions

inria-00590256, version 1 (03-05-2011)

Identifiers

Cite

Daniel Weinland, Edmond Boyer. Action Recognition using Exemplar-based Embedding. CVPR 2008 - IEEE Conference on Computer Vision and Pattern Recognition, Jun 2008, Anchorage, United States. pp.1-7, ⟨10.1109/CVPR.2008.4587731⟩. ⟨inria-00590256⟩
284 views
400 downloads
