Efficient constrained parametrization of GMM with class-based mixture weights for Automatic Speech Recognition
Abstract
Acoustic modeling techniques based on clustering of the training data have become essential in large vocabulary continuous speech recognition (LVCSR) systems. The clustered data (obtained by supervised or unsupervised clustering) is typically used to estimate class-specific parameter sets by adapting a speaker-independent model on each subset. For Hidden Markov Models with Gaussian mixture observation densities (HMM-GMM), most adaptation techniques focus on re-estimating the mean vectors, whereas the mixture weights typically remain almost uniformly distributed. In this work we propose a way of specifying subspaces of the GMM by associating sets of Gaussian mixture weights with speaker classes while sharing the Gaussian parameters across those classes. This method yields a richer GMM parametrization without significantly increasing the number of model parameters. Our experiments on French radio broadcast data show that this parametrization improves accuracy compared to models with a similar or even larger number of parameters.
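To make the proposed parametrization concrete, the following is a minimal sketch of the class-dependent observation density it implies; the notation (state index j, speaker class c, M mixture components per state) is ours, introduced for illustration, and is not taken from the paper:

\[
b_j^{(c)}(\mathbf{o}_t) \;=\; \sum_{m=1}^{M} w_{jm}^{(c)} \, \mathcal{N}\!\bigl(\mathbf{o}_t;\, \boldsymbol{\mu}_{jm}, \boldsymbol{\Sigma}_{jm}\bigr), \qquad \sum_{m=1}^{M} w_{jm}^{(c)} = 1 .
\]

Only the mixture weights \(w_{jm}^{(c)}\) depend on the speaker class \(c\); the means \(\boldsymbol{\mu}_{jm}\) and covariances \(\boldsymbol{\Sigma}_{jm}\) are shared across classes. Under this reading, supporting \(C\) speaker classes adds only \((C-1)\,M\) weights per state on top of the speaker-independent model, which is why the number of parameters grows only marginally.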