Conference paper, 2013

Audio Event Detection in Movies using Multiple Audio Words and Contextual Bayesian Networks

Abstract

This article investigates a novel use of the well-known audio words representation to detect specific audio events, namely gunshots and explosions, with the aim of achieving greater robustness to soundtrack variability in Hollywood movies. An audio stream is processed as a sequence of stationary segments. Each segment is described by one or several audio words obtained by applying product quantization to standard features. This multiple-audio-words representation built via product quantization is one of the novelties of this work. Based on this representation, Bayesian networks are used to exploit contextual information for audio event detection. Experiments are performed on a comprehensive, publicly available set of 15 movies. Results are comparable to the state of the art obtained on the same dataset and show increased robustness to decision thresholds, although this limits the range of possible operating points in some conditions. Late fusion provides a solution to this issue.
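For illustration only, the sketch below shows one way a multiple-audio-words representation could be derived by product quantization of per-segment features: the descriptor is split into subspaces, each quantized against its own codebook, yielding several words per segment. The subspace count, codebook size, and feature dimensionality are assumptions for the example, not values taken from the paper.

```python
# Minimal sketch (not the authors' code): multiple "audio words" per segment
# via product quantization of standard features (e.g. MFCC-like descriptors).
import numpy as np
from sklearn.cluster import KMeans

N_SUBSPACES = 4      # assumed: split each descriptor into this many sub-vectors
CODEBOOK_SIZE = 64   # assumed: one codebook of this size per subspace


def train_codebooks(features):
    """Train one k-means codebook per feature subspace.

    features: (n_segments, n_dims) array of per-segment descriptors.
    """
    subvectors = np.array_split(features, N_SUBSPACES, axis=1)
    return [KMeans(n_clusters=CODEBOOK_SIZE, n_init=10).fit(sv)
            for sv in subvectors]


def encode_segment(feature, codebooks):
    """Map one segment descriptor to several audio words (one per subspace)."""
    subvectors = np.array_split(feature.reshape(1, -1), N_SUBSPACES, axis=1)
    return [int(cb.predict(sv)[0]) for cb, sv in zip(codebooks, subvectors)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segments = rng.normal(size=(500, 39))          # e.g. 39-dim features
    codebooks = train_codebooks(segments)
    print(encode_segment(segments[0], codebooks))  # e.g. [12, 7, 55, 3]
```

In such a scheme, the resulting word indices per segment would then feed a downstream sequence model (here, the paper's contextual Bayesian networks) rather than being used directly for classification.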

Dates and versions

hal-00822022 , version 1 (13-05-2013)

Identifiers

  • HAL Id : hal-00822022 , version 1

Cite

Cédric Penet, Claire-Hélène Demarty, Guillaume Gravier, Patrick Gros. Audio Event Detection in Movies using Multiple Audio Words and Contextual Bayesian Networks. CBMI - 11th International Workshop on Content Based Multimedia Indexing - 2013, Jun 2013, Veszprém, Hungary. ⟨hal-00822022⟩