Report, Year: 2004

Ontologies For Video Events

Nicolas Maillot
  • Role: Author
Monique Thonnat
  • Role: Author
Van-Thinh Vu
  • Role: Author

Abstract

This report shows how we represent video event knowledge for Automatic Video Interpretation. To address this issue, we first build an ontology structure to design concepts relative to video events. There are two main types of concepts to be represented: physical objects of the observed scene and video events occurring in the scene. A physical object can be a static object (e.g. a desk, a machine) or a mobile object detected by a vision routine (e.g. a person, a car). A video event can be a primitive state, a composite state, a primitive event or a composite event. Primitive states are the atoms used to build the other concepts of the knowledge base of an Automatic Video Interpretation System. A composed concept (i.e. a composite state or event) is represented by a combination of its sub-concepts and possibly a set of events that are not allowed to occur during the recognition of this concept. We use non-temporal constraints (logical and spatial constraints) to specify the physical objects involved in a concept, and temporal constraints, including Allen's interval algebra operators, to describe relations (e.g. temporal order, duration) between the sub-concepts defined within a composed concept. Secondly, we validate the proposed video event ontology structure by building two ontologies (for Visual Bank and Metro Monitoring) using ORION's Scenario Description Language.
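As a rough illustration of the representation described in the abstract, the following Python sketch models physical objects, primitive states, and a composed concept that combines sub-concepts with Allen-style temporal constraints and forbidden events. The class layout, the concept names (p_at_counter, access_to_safe) and the constraint strings are hypothetical and only mirror the structure described above; they are not the ORION team's actual Scenario Description Language.

from dataclasses import dataclass, field
from typing import List, Tuple

# Physical objects of the observed scene: static objects are known a priori,
# mobile objects are detected at run time by vision routines.
@dataclass
class PhysicalObject:
    name: str      # variable name used inside a concept, e.g. "p" or "counter"
    kind: str      # e.g. "Person", "Zone"
    mobile: bool   # True for detected mobile objects, False for static objects

# A primitive state is the atomic building block of the knowledge base.
@dataclass
class PrimitiveState:
    name: str
    objects: List[PhysicalObject]                          # physical objects involved
    constraints: List[str] = field(default_factory=list)   # non-temporal (logical, spatial) constraints

# A composed concept (composite state or composite event) combines sub-concepts,
# temporal constraints between them (Allen's interval relations), and optionally
# events that must not occur during its recognition.
@dataclass
class CompositeEvent:
    name: str
    sub_concepts: List[PrimitiveState]
    temporal_constraints: List[Tuple[str, str, str]] = field(default_factory=list)  # (concept_a, relation, concept_b)
    forbidden: List[str] = field(default_factory=list)      # events not allowed while recognizing this concept

# Hypothetical example loosely inspired by the bank-monitoring application:
person = PhysicalObject("p", "Person", mobile=True)
counter = PhysicalObject("counter", "Zone", mobile=False)
safe = PhysicalObject("safe", "Zone", mobile=False)

at_counter = PrimitiveState("p_at_counter", [person, counter], ["in(p, counter)"])
at_safe = PrimitiveState("p_at_safe", [person, safe], ["in(p, safe)"])

access_to_safe = CompositeEvent(
    name="access_to_safe",
    sub_concepts=[at_counter, at_safe],
    temporal_constraints=[("p_at_counter", "before", "p_at_safe")],  # Allen "before" relation
    forbidden=["employee_at_safe"],
)
print(access_to_safe.name, [c.name for c in access_to_safe.sub_concepts])

A recognition engine would then try to match the sub-concepts against detected mobile objects and check the temporal and forbidden-event constraints; that machinery is outside the scope of this sketch.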
Main file
RR-5189.pdf (136.79 KB)

Dates and versions

inria-00071397 , version 1 (23-05-2006)

Identifiers

  • HAL Id : inria-00071397 , version 1

Cite

François Bremond, Nicolas Maillot, Monique Thonnat, Van-Thinh Vu. Ontologies For Video Events. RR-5189, INRIA. 2004. ⟨inria-00071397⟩
225 Views
218 Downloads
