Conference Papers, Year: 2020

Towards unsupervised learning of speech features in the wild

Abstract

Recent work on unsupervised contrastive learning of speech representations has shown promising results, but it has so far mostly been applied to clean, curated speech datasets. Can it also be used with unprepared audio data "in the wild"? Here, we explore three potential problems in this setting: (i) the presence of non-speech data, (ii) noisy or low-quality speech data, and (iii) imbalance in the speaker distribution. We show that on the Libri-light train set, which is itself a relatively clean speech-only dataset, these problems combined can already incur a relative performance cost of up to 30% on the ABX score. We show that the first two problems can be alleviated by data filtering: voice activity detection selects speech segments, while the perplexity of a model trained on clean data helps discard entire files. We show that the third problem can be alleviated by learning a speaker embedding in the predictive branch of the model. Finally, we show that these techniques build more robust speech features that can be transferred to an ASR task in the low-resource setting.
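
The two filtering steps mentioned in the abstract (voice activity detection to keep speech segments, perplexity under a clean-data model to reject whole files) can be illustrated with a minimal sketch. This is not the authors' pipeline: the energy-based VAD, the dB and perplexity thresholds, and the assumption that a model trained on clean data already provides a per-file perplexity score are all illustrative choices, not details from the paper.

```python
import numpy as np

def energy_vad(waveform, sample_rate, frame_ms=25, hop_ms=10, threshold_db=-35.0):
    """Toy VAD: boolean mask of frames whose log energy exceeds a fixed threshold."""
    frame = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    n_frames = max(0, 1 + (len(waveform) - frame) // hop)
    energies = np.array([
        10.0 * np.log10(np.mean(waveform[i * hop:i * hop + frame] ** 2) + 1e-10)
        for i in range(n_frames)
    ])
    return energies > threshold_db

def keep_file(file_perplexity, max_perplexity=100.0):
    """Discard an entire file when its perplexity is too high.

    `file_perplexity` is assumed to come from a model trained on clean speech;
    the threshold is a placeholder, not a value from the paper."""
    return file_perplexity <= max_perplexity

# Usage: keep only voiced frames of a 16 kHz waveform from a file that passed the filter.
sr = 16000
wav = 0.01 * np.random.randn(2 * sr).astype(np.float32)  # stand-in for real audio
if keep_file(file_perplexity=42.0):
    voiced = energy_vad(wav, sr)
    print(f"{voiced.mean():.0%} of frames flagged as speech")
```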
Main file: Riviere_D_2020_Towards_CPC_in_the_wild.SLT.pdf (214.32 KB). Origin: files produced by the author(s).

Dates and versions

hal-03070411, version 1 (15-12-2020)

Identifiers

  • HAL Id: hal-03070411, version 1

Cite

Morgane Rivière, Emmanuel Dupoux. Towards unsupervised learning of speech features in the wild. SLT 2020: IEEE Spoken Language Technology Workshop, Dec 2020, Shenzhen / Virtual, China. ⟨hal-03070411⟩