Journal article in Computer Speech and Language, 2017

Multi-microphone speech recognition in everyday environments

Jon Barker
Ricard Marxer
Emmanuel Vincent
Shinji Watanabe

Abstract

Multi-microphone signal processing techniques have the potential to greatly improve the robustness of automatic speech recognition (ASR) in distant-microphone settings. However, in everyday environments, typified by complex non-stationary noise backgrounds, designing effective multi-microphone speech recognition systems is non-trivial. In particular, optimal performance requires tight integration of the front-end signal processing and the back-end statistical speech and noise source modelling. The best way to achieve this in a modern deep learning speech recognition framework remains unclear. Further, variability in microphone array design, and the consequent lack of real training data for any particular configuration, may mean that systems have to be able to generalise from audio captured using mismatched microphone geometries or produced using simulation.
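The abstract refers to the integration of a multi-microphone front end with an ASR back end. As a purely illustrative sketch, not the method of any system described in this issue, the snippet below implements a basic delay-and-sum beamformer, the simplest kind of multi-microphone front end whose output could be passed on to feature extraction and a neural acoustic model. The array size, delays, and the helper name delay_and_sum are assumptions made up for the example.

    # Illustrative sketch only: a basic delay-and-sum beamformer as a
    # multi-microphone front end. Geometry, delays and names are assumptions.
    import numpy as np

    def delay_and_sum(channels, delays_samples):
        """Align each channel by an integer sample delay and average them.

        channels:       (num_mics, num_samples) array of time-domain signals.
        delays_samples: per-microphone integer delays (relative to a reference
                        microphone) that steer the array towards the speaker.
        """
        num_mics, num_samples = channels.shape
        out = np.zeros(num_samples)
        for m in range(num_mics):
            d = delays_samples[m]
            shifted = np.roll(channels[m], -d)   # advance channel m by d samples
            if d > 0:
                shifted[-d:] = 0.0               # zero the wrapped-around tail
            elif d < 0:
                shifted[:-d] = 0.0               # zero the wrapped-around head
            out += shifted
        return out / num_mics                     # averaging attenuates diffuse noise

    # Toy usage: 4 microphones, 1 s of noise standing in for recorded audio.
    rng = np.random.default_rng(0)
    mics = rng.standard_normal((4, 16000))
    enhanced = delay_and_sum(mics, delays_samples=[0, 1, 2, 3])
    # `enhanced` would then feed feature extraction and the acoustic model,
    # i.e. the "back end" the abstract refers to.

In practice the fixed delays above would be estimated from the data, and the abstract's point is precisely that more effective systems couple such front-end processing with the back-end model rather than treating the two stages independently.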
Main file
vincent_CSL17.pdf (67.03 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01483469, version 1 (05-03-2017)

Identifiers

  • HAL Id: hal-01483469
  • DOI: 10.1016/j.csl.2017.02.007

Cite

Jon Barker, Ricard Marxer, Emmanuel Vincent, Shinji Watanabe. Multi-microphone speech recognition in everyday environments. Computer Speech and Language, 2017, 46, pp.386-387. ⟨10.1016/j.csl.2017.02.007⟩. ⟨hal-01483469⟩
