Conference Paper, Year: 2012

Actlets: A novel local representation for human action recognition in video

Abstract

This paper addresses the problem of human action recognition in realistic videos. We follow the recently successful local approaches and represent videos by means of local motion descriptors. To overcome the huge variability of human actions in motion and appearance, we propose a supervised approach to learn local motion descriptors - actlets - from a large pool of annotated video data. The main motivation behind our method is to construct action-characteristic representations of body joints undergoing specific motion patterns while learning invariance to changes in camera views, lighting, human clothing, and other factors. We avoid the prohibitive cost of manual supervision and show how to learn actlets automatically from synthetic videos of avatars driven by motion-capture data. We evaluate our method on the challenging UCF-Sports and YouTube-Actions datasets and show that it significantly improves over, and is complementary to, existing techniques.
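The paper defines the full actlet pipeline; purely as a rough illustration of the general idea, the sketch below uses synthetic NumPy data and scikit-learn linear SVMs to train per-joint motion-pattern classifiers and pool their responses into a video-level feature vector. The descriptor dimensionality, number of actlets, pooling scheme, and classifier choice here are placeholder assumptions, not the paper's actual settings.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

D = 96          # assumption: dimensionality of a local motion descriptor (e.g. HOF-like)
N_ACTLETS = 4   # assumption: number of joint-specific motion patterns to learn
N_TRAIN = 500   # toy training patches per actlet (stand-in for rendered avatar videos)

# Stage 1: learn actlets from labelled local descriptors.
# Each actlet is a binary classifier for one joint-specific motion pattern.
actlets = []
for a in range(N_ACTLETS):
    pos = rng.normal(loc=a, scale=1.0, size=(N_TRAIN, D))   # toy "pattern a" descriptors
    neg = rng.normal(loc=0.0, scale=3.0, size=(N_TRAIN, D))  # toy background descriptors
    X = np.vstack([pos, neg])
    y = np.r_[np.ones(N_TRAIN), np.zeros(N_TRAIN)]
    actlets.append(LinearSVC(C=1.0).fit(X, y))

# Stage 2: describe a video by pooled actlet responses.
def video_representation(local_descriptors):
    """Max-pool each actlet's decision values over all local descriptors
    extracted from one video (max-pooling is an assumption, not the paper's choice)."""
    responses = np.column_stack(
        [clf.decision_function(local_descriptors) for clf in actlets]
    )
    return responses.max(axis=0)

# Toy usage: one "video" is a bag of local motion descriptors.
video = rng.normal(size=(200, D))
print(video_representation(video))   # length-N_ACTLETS feature vector
```

In a full system, the resulting video-level vector would be fed to a standard action classifier (for example an SVM), alongside or fused with existing local descriptor representations.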

Dates and versions

hal-01063332, version 1 (11-09-2014)

Cite

Muhammad M. Ullah, Ivan Laptev. Actlets: A novel local representation for human action recognition in video. ICIP 2012 - International Conference on Image Processing, Sep 2012, Orlando, Florida, United States. pp. 777-780. ⟨10.1109/ICIP.2012.6466975⟩. ⟨hal-01063332⟩