Blind Audiovisual Source Separation Based on Redundant Representations
Abstract
In this work we present a method that performs complete audiovisual source separation without any prior information. The method is based on the assumption that sounds are caused by moving structures. An efficient representation of the audio and video sequences therefore makes it possible to relate synchronous structures across the two modalities. A robust clustering algorithm groups the video structures exhibiting strong correlation with the audio, so that the sources are counted and located in the image. Using this information and exploiting audio-video correlation, the activity of each audio source over time is determined. Spectral GMMs are then learnt in time slots where only one source is active, so that the sources can be separated when the audio is a mixture. Audio source separation performance is rigorously evaluated, clearly showing that the proposed algorithm is efficient and robust.
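As an illustration only, the sketch below shows two of the ingredients mentioned in the abstract: scoring audio-video synchrony by correlating per-structure video activations with an audio activation signal, and fitting a spectral GMM on frames taken from single-source time slots. The feature choices (STFT frame energy, diagonal-covariance GMM), the function names, and the slot-detection step are assumptions made for this sketch, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of two steps from the abstract:
# (1) audio-video synchrony scored by correlation, (2) a spectral GMM learnt from
# time slots where only one source is active. Features and thresholds are illustrative.
import numpy as np
from scipy.signal import stft
from sklearn.mixture import GaussianMixture

def audio_activation(audio, fs, nperseg=512):
    """Per-frame audio energy from the STFT magnitude (a simple audio activation)."""
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg)
    return np.sum(np.abs(Z) ** 2, axis=0)          # energy per time frame

def video_audio_correlation(video_feats, audio_act):
    """Correlate each video structure's activation (e.g. its motion magnitude,
    resampled to the audio frame rate) with the audio activation."""
    scores = []
    for v in video_feats:                           # one activation series per structure
        v = np.resize(v, audio_act.shape)           # crude alignment of frame rates
        scores.append(np.corrcoef(v, audio_act)[0, 1])
    return np.array(scores)

def learn_spectral_gmm(audio, fs, active_slots, n_components=8, nperseg=512):
    """Fit a GMM on log-magnitude spectra restricted to single-source frames."""
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg)
    logmag = np.log(np.abs(Z) + 1e-8).T             # frames x frequency bins
    frames = logmag[active_slots]                   # keep single-source frames only
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(frames)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 8000
    audio = rng.standard_normal(fs * 2)             # stand-in for a real recording
    act = audio_activation(audio, fs)
    video_feats = [rng.standard_normal(act.size) for _ in range(3)]
    print("correlation scores:", video_audio_correlation(video_feats, act))
    slots = act > np.median(act)                    # stand-in for detected single-source slots
    gmm = learn_spectral_gmm(audio, fs, slots)
    print("GMM means shape:", gmm.means_.shape)
```

In the paper's setting, the activation series would come from the redundant audio and video representations rather than from the toy features used here.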
Keywords
audio sequences
audio-video correlation
blind audiovisual source separation
robust clustering algorithm
spectral Gaussian mixture models
synchronous structures
video sequences
Gaussian processes
audio signal processing
blind source separation
correlation methods
image sequences
signal representation
video signal processing
Main file
2008_ICASSP_LlagosterasEtAl_BAVSS-authorversion.pdf (621.13 Ko)
Origin: Files produced by the author(s)