Conference paper. Year: 2013

Efficient constrained parametrization of GMM with class-based mixture weights for Automatic Speech Recognition

Arseniy Gorin
  • Role: Author
  • PersonId: 767294
  • IdRef: 182505596
Denis Jouvet

Abstract

Acoustic modeling techniques based on clustering of the training data have become essential in large vocabulary continuous speech recognition (LVCSR) systems. The clustered data (obtained with or without supervision) are typically used to estimate sets of parameters by adapting the speaker-independent model on each subset. For Hidden Markov Models with Gaussian mixture observation densities (HMM-GMM), most adaptation techniques focus on re-estimating the mean vectors, whereas the mixture weights typically remain almost uniformly distributed. In this work we propose a way of specifying subspaces of the GMM by associating sets of Gaussian mixture weights with speaker classes while sharing the Gaussian parameters across those classes. This allows a richer parametrization of the GMM without significantly increasing the number of model parameters. Our experiments on French radio broadcast data demonstrate improved accuracy with this parametrization compared to models with a similar, or even larger, number of parameters.
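As a rough illustration of the idea described in the abstract (not the paper's implementation), the state likelihood under such a model can be sketched as follows, assuming K Gaussians per state shared across C speaker classes, with only the mixture-weight vector depending on the selected speaker class. The function name and array layout below are illustrative assumptions.

import numpy as np
from scipy.stats import multivariate_normal

def state_likelihood(x, means, covs, class_weights, speaker_class):
    """Likelihood of observation x for one HMM state (illustrative sketch).

    means         : (K, D) Gaussian mean vectors, shared across speaker classes
    covs          : (K, D, D) covariance matrices, shared across speaker classes
    class_weights : (C, K) mixture weights, one row per speaker class (rows sum to 1)
    speaker_class : index c of the speaker class assigned to the current utterance
    """
    w = class_weights[speaker_class]  # only the weights are class-specific
    densities = np.array([multivariate_normal.pdf(x, mean=means[k], cov=covs[k])
                          for k in range(len(w))])
    return float(np.dot(w, densities))  # sum_k w_{c,k} * N(x; mu_k, Sigma_k)

Because the Gaussian means and covariances are shared, adding a speaker class only adds K weights per state, which is why the model size grows slowly with the number of classes.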
File not deposited

Dates and versions

hal-00923202, version 1 (02-01-2014)

Identifiers

  • HAL Id: hal-00923202, version 1

Cite

Arseniy Gorin, Denis Jouvet. Efficient constrained parametrization of GMM with class-based mixture weights for Automatic Speech Recognition. LTC'13 - 6th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, Dec 2013, Poznań, Poland. ⟨hal-00923202⟩