Class-based speech recognition using a maximum dissimilarity criterion and a tolerance classification margin
Abstract
One of the difficult problems of Automatic Speech Recognition (ASR) is dealing with acoustic signal variability. Much state-of-the-art research has demonstrated that splitting the data into classes and using a model specific to each class provides better results. However, when the dataset is not large enough and the number of classes increases, there is less data for adapting each class model and performance degrades. This work extends and combines previous research on unsupervised dataset splitting, which builds maximally separated classes, with the introduction of a tolerance classification margin for better training of the class model parameters. Experiments carried out on the French ESTER2 radio broadcast data show an improvement in recognition results compared to those obtained previously. Finally, we demonstrate that combining the decoding results from the different class models leads to even more significant improvements.