Fully Automated Non-Native Speech Recognition Using Confusion-Based Acoustic Model Integration And Graphemic Constraints
Abstract
This paper presents a fully automated approach to the recognition of non-native speech based on acoustic model modification. For a native language (L1) and a spoken language (L2), pronunciation variants of the phones of L2 are automatically extracted from an existing non-native database as a confusion matrix with sequences of L1 phones, using the ASR systems of both L1 and L2. This confusion concept handles the problem that some L2 phones have no direct match among L1 phones. The confusion matrix is then used to modify the acoustic models (HMMs) of L2 phones by integrating the corresponding L1 phone models as alternative HMM paths. We introduce graphemic constraints into the confusion extraction process: the phonetic confusion is established for each pair of an L2 phone and the grapheme(s) corresponding to that phone. We claim that pronunciation errors may depend on the graphemes associated with each phone. The modified ASR system achieved a relative WER improvement between 32% and 40% (L1 = French, L2 = English) on the French non-native database used for testing. The introduction of graphemic constraints into the phonetic confusion yielded further improvements.
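As a rough illustration of the confusion-extraction step described above, the sketch below (hypothetical Python; the phone labels, alignment data, and function names are illustrative and not taken from the paper) counts how often each L2 phone is realised as a given L1 phone sequence and normalises the counts into a confusion matrix. Under the graphemic-constraint variant, the key would be the pair (L2 phone, grapheme) rather than the phone alone.

```python
from collections import defaultdict

def build_confusion_matrix(aligned_pairs):
    """Estimate P(L1 phone sequence | L2 phone) from aligned recognizer outputs.

    aligned_pairs: iterable of (l2_phone, l1_sequence) tuples obtained by
    aligning the L2 ASR transcription with the L1 ASR transcription of the
    same non-native utterance.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for l2_phone, l1_sequence in aligned_pairs:
        counts[l2_phone][tuple(l1_sequence)] += 1

    # Normalise raw counts into per-phone probability distributions.
    matrix = {}
    for l2_phone, seq_counts in counts.items():
        total = sum(seq_counts.values())
        matrix[l2_phone] = {seq: c / total for seq, c in seq_counts.items()}
    return matrix

# Toy alignment (invented data): each entry pairs an English (L2) phone with
# the French (L1) phone sequence produced by a French-accented speaker.
alignments = [
    ("TH", ["s"]),   # English /th/ often realised as French /s/
    ("TH", ["z"]),
    ("TH", ["s"]),
    ("IH", ["i"]),
    ("HH", []),      # English /h/ frequently dropped
]

confusion = build_confusion_matrix(alignments)
for phone, variants in confusion.items():
    print(phone, variants)
```

In the full method, each L1 phone sequence with non-negligible probability would then be added as an alternative path in the HMM of the corresponding L2 phone; the code above only covers the counting and normalisation step.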
Domains
Computer Science and Language [cs.CL]
Origin: Files produced by the author(s)