Journal article in IEEE Transactions on Pattern Analysis and Machine Intelligence, year: 2018

Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion

Abstract

Speaker diarization consists of assigning speech signals to the people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios in which several participants engage in multi-party interaction while moving around and turning their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes, in a principled way, speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process executed at each time slice and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset is introduced that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue. The proposed method is thoroughly tested and benchmarked against several state-of-the-art diarization algorithms.
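To make the temporal-inference part of the abstract concrete, below is a minimal Python sketch of exact forward filtering over a small discrete speaker state (which tracked person, if any, is speaking at each time slice). Everything here is an illustrative assumption rather than the paper's actual formulation: the "sticky" transition model, the state space, and the names (diarization_forward, assoc_lik, self_prob) are hypothetical, and the per-slice association likelihoods merely stand in for the output of the paper's audio-visual association process.

import numpy as np

def diarization_forward(assoc_lik, self_prob=0.9):
    """Exact forward filtering over a discrete speaker state.

    assoc_lik : (T, K) array; assoc_lik[t, k] is a per-slice audio-visual
        association likelihood for state k at time slice t (e.g. K = number
        of tracked persons + one 'silence' state). Hypothetical input.
    self_prob : probability that the same speaker keeps the turn between
        consecutive slices (a simple sticky-dynamics assumption).
    """
    T, K = assoc_lik.shape
    # Sticky transition matrix: speech turns tend to persist; switches
    # are spread uniformly over the remaining states.
    trans = np.full((K, K), (1.0 - self_prob) / (K - 1))
    np.fill_diagonal(trans, self_prob)

    alpha = np.empty((T, K))
    alpha[0] = assoc_lik[0] / assoc_lik[0].sum()
    for t in range(1, T):
        pred = alpha[t - 1] @ trans          # propagate speaker dynamics
        alpha[t] = pred * assoc_lik[t]       # fuse per-slice AV evidence
        alpha[t] /= alpha[t].sum()           # normalize filtering posterior
    return alpha                             # posterior over speakers per slice

# Toy usage: 3 states (2 persons + silence) over 5 time slices.
lik = np.array([[0.7, 0.2, 0.1],
                [0.6, 0.3, 0.1],
                [0.2, 0.7, 0.1],
                [0.1, 0.8, 0.1],
                [0.1, 0.2, 0.7]])
print(diarization_forward(lik).argmax(axis=1))  # most likely speaker per slice

The exactness in this sketch comes from the small discrete state space: the recursion sums over all K states at every slice, so no sampling or variational approximation is needed, which mirrors why a compact diarization variable admits an efficient exact inference procedure.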
Files

Main file: Gebru-TPAMI2017-final.pdf (6.9 MB)
Figures: seq32-4P_img-000300.png (131.11 KB), seq32-4P_img-000300.jpg (13.46 KB)
Origin: files produced by the author(s)

Dates and versions

hal-01413403, version 1 (03-01-2017)

Identifiers

HAL Id: hal-01413403
DOI: 10.1109/TPAMI.2017.2648793

Cite

Israel Gebru, Sileye Ba, Xiaofei Li, Radu Horaud. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40 (5), pp. 1086-1099. ⟨10.1109/TPAMI.2017.2648793⟩. ⟨hal-01413403⟩
