Conference Paper, Year: 2020

Achieving Multi-Accent ASR via Unsupervised Acoustic Model Adaptation

Abstract

Current automatic speech recognition (ASR) systems trained on native speech often perform poorly when applied to non-native or accented speech. In this work, we propose to compute x-vector-like accent embeddings and use them as auxiliary inputs to an acoustic model trained on native data only, in order to improve the recognition of multi-accent data comprising native, non-native, and accented speech. In addition, we leverage untranscribed accented training data by means of semi-supervised learning. Our experiments show that acoustic models trained with the proposed accent embeddings outperform those trained with conventional i-vector or x-vector speaker embeddings, and achieve a 15% relative word error rate (WER) reduction on non-native and accented speech with respect to acoustic models trained with regular spectral features only. Semi-supervised training using just 1 hour of untranscribed speech per accent yields an additional 15% relative WER reduction with respect to models trained on native data only.
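The embedding-based adaptation described in the abstract can be illustrated with a short sketch. The following is a minimal example, assuming a PyTorch feed-forward hybrid acoustic model; the class name AccentAwareAcousticModel and all dimensions are illustrative assumptions, not the authors' implementation. The key step is that an utterance-level accent embedding is repeated over frames and concatenated with the spectral features as an auxiliary input.

import torch
import torch.nn as nn

class AccentAwareAcousticModel(nn.Module):
    """Hypothetical acoustic model with an auxiliary accent-embedding input."""

    def __init__(self, feat_dim=40, accent_dim=100, hidden_dim=512, num_senones=3000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + accent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_senones),  # per-frame senone logits
        )

    def forward(self, feats, accent_emb):
        # feats: (batch, frames, feat_dim); accent_emb: (batch, accent_dim).
        # Repeat the utterance-level embedding for every frame, as is
        # conventionally done with i-vector or x-vector auxiliary inputs.
        aux = accent_emb.unsqueeze(1).expand(-1, feats.size(1), -1)
        return self.net(torch.cat([feats, aux], dim=-1))

The semi-supervised step can be sketched the same way: the current model decodes the untranscribed accented audio into pseudo-transcripts, and the model is then retrained on the union of the transcribed native data and the pseudo-labeled accented data. Here decode and train are hypothetical helpers standing in for a full decoding and training pipeline, not the authors' API.

def semi_supervised_round(model, native_data, untranscribed_accented, decode, train):
    # Pseudo-label the untranscribed accented utterances with the current model,
    # then retrain on native data plus the pseudo-labeled accented data.
    pseudo_labeled = [(utt, decode(model, utt)) for utt in untranscribed_accented]
    return train(model, native_data + pseudo_labeled)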
Main file: cameraReady_2742.pdf (432.88 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02907929, version 1 (02-08-2020)

Identifiers

  • HAL Id: hal-02907929, version 1

Cite

Mehmet Ali Tuğtekin Turan, Emmanuel Vincent, Denis Jouvet. Achieving Multi-Accent ASR via Unsupervised Acoustic Model Adaptation. INTERSPEECH 2020, Oct 2020, Shanghai, China. ⟨hal-02907929⟩