Conference Papers, Year: 2008

A Framework for Indexing Human Actions in Video

Abstract

Several researchers have addressed the problem of human action recognition using a variety of algorithms. An underlying assumption in most of these algorithms is that action boundaries are already known in a test video sequence. In this paper, we propose a fast method for continuous human action recognition in a video sequence. We propose the use of a low-dimensional feature vector which consists of (a) the projections of the width profile of the actor onto a Discrete Cosine Transform (DCT) basis and (b) simple spatio-temporal features. We use a previously proposed average-template model with multiple features for modelling human actions and combine it with the one-pass Dynamic Programming (DP) algorithm for continuous action recognition. This model accounts for intra-class variability in the way an action is performed. Furthermore, we demonstrate a way to perform noise-robust recognition by creating a noise-matched condition between the training and the test data. The effectiveness of our method is demonstrated by experiments on the IXMAS dataset of persons performing various actions and an outdoor action database collected by us.
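As a minimal sketch (not the authors' code) of the feature extraction described above, the Python snippet below computes the actor's width profile from a binary silhouette mask and keeps its leading DCT coefficients. The profile length and the number of retained coefficients are illustrative assumptions, and the simple spatio-temporal features mentioned in the abstract are omitted.

```python
# Minimal sketch of the width-profile + DCT feature described in the abstract.
# Assumptions: `silhouette` is a 2D boolean (or 0/1) mask of the actor;
# PROFILE_LENGTH and NUM_DCT_COEFFS are illustrative, not the paper's values.
import numpy as np
from scipy.fft import dct

PROFILE_LENGTH = 64    # rows after resampling the bounding box (assumed)
NUM_DCT_COEFFS = 10    # leading DCT coefficients kept (assumed)

def width_profile(silhouette: np.ndarray) -> np.ndarray:
    """Per-row width of the actor inside the silhouette's bounding box."""
    rows = np.where(silhouette.any(axis=1))[0]
    cols = np.where(silhouette.any(axis=0))[0]
    crop = silhouette[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    widths = crop.sum(axis=1).astype(float)        # foreground pixels per row
    # Resample to a fixed length so profiles from different frames align.
    x_old = np.linspace(0.0, 1.0, num=len(widths))
    x_new = np.linspace(0.0, 1.0, num=PROFILE_LENGTH)
    return np.interp(x_new, x_old, widths)

def frame_feature(silhouette: np.ndarray) -> np.ndarray:
    """Low-dimensional frame descriptor: leading DCT coefficients of the profile."""
    profile = width_profile(silhouette)
    return dct(profile, type=2, norm='ortho')[:NUM_DCT_COEFFS]
```

In the paper, such per-frame descriptors feed the average-template model, which is matched against the continuous test stream with a one-pass DP decoder; that decoding step is not reproduced here.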
Main file: mlvma08_submission_18.pdf (283.86 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00326719, version 1 (05-10-2008)

Identifiers

  • HAL Id: inria-00326719, version 1

Cite

Kaustubh Kulkarni, Srikanth Cherla, Amit Kale, V. Ramasubramanian. A Framework for Indexing Human Actions in Video. The 1st International Workshop on Machine Learning for Vision-based Motion Analysis - MLVMA'08, Oct 2008, Marseille, France. ⟨inria-00326719⟩

Collections

MLVMA08
