
AUDIO-VISUAL SPEECH ENHANCEMENT WITH A DEEP KALMAN FILTER GENERATIVE MODEL

Abstract

Deep latent variable generative models based on the variational autoencoder (VAE) have shown promising performance for audiovisual speech enhancement (AVSE). The underlying idea is to learn a VAE-based audiovisual prior distribution for clean speech data, and then combine it with a statistical noise model to recover the speech signal from a noisy audio recording and video (lip images) of the target speaker. Existing generative models developed for AVSE do not take into account the sequential nature of speech data, which prevents them from fully exploiting the visual data. In this paper, we present an audiovisual deep Kalman filter (AV-DKF) generative model that assumes a first-order Markov chain model for the latent variables and effectively fuses audiovisual data. Moreover, we develop an efficient inference methodology to estimate the speech signal at test time. We conduct a set of experiments to compare different variants of generative models for speech enhancement. The results demonstrate the superiority of the AV-DKF model over both its audio-only version and the non-sequential audio-only and audiovisual VAE-based models.
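For a concrete picture of the first-order Markov assumption, the latent prior factorizes as p(z_{1:T} | v_{1:T}) = p(z_1) ∏_{t=2}^{T} p(z_t | z_{t-1}, v_t), where z_t is the latent state and v_t the visual (lip) embedding at frame t. The PyTorch sketch below illustrates one plausible parameterization of such a visually conditioned Gaussian transition; the layer sizes, the concatenation-based fusion, and all names (AVTransitionPrior, z_dim, v_dim) are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class AVTransitionPrior(nn.Module):
    """Gaussian transition prior p(z_t | z_{t-1}, v_t).

    A minimal sketch of a first-order Markov latent dynamics model
    conditioned on visual features. All dimensions and the
    concatenation-based fusion are assumptions for illustration.
    """

    def __init__(self, z_dim=32, v_dim=64, h_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + v_dim, h_dim),  # fuse z_{t-1} and v_t
            nn.Tanh(),
        )
        self.mean = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)

    def forward(self, z_prev, v_t):
        h = self.net(torch.cat([z_prev, v_t], dim=-1))
        return self.mean(h), self.logvar(h)


def sample_latent_trajectory(prior, v, z_dim=32):
    """Ancestrally sample z_{1:T} given visual features v of shape (T, v_dim)."""
    z = torch.zeros(z_dim)  # assumed zero initial state
    zs = []
    for t in range(v.shape[0]):
        mean, logvar = prior(z, v[t])
        # Reparameterized Gaussian sample: mean + std * epsilon
        z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
        zs.append(z)
    return torch.stack(zs)  # (T, z_dim)
```

Conditioning the transition on v_t at every frame is what would let such a sequential model propagate visual information through time, in contrast to the non-sequential VAE priors the abstract criticizes, which treat frames independently.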

Dates and versions

hal-03833814, version 1 (28-10-2022)

Identifiers

HAL Id: hal-03833814

Cite

Ali Golmakani, Mostafa Sadeghi, Romain Serizel. Audio-Visual Speech Enhancement with a Deep Kalman Filter Generative Model. 2022. ⟨hal-03833814⟩