An Effective Lip Tracking Algorithm for Acoustic-to-Articulatory Inversion
Abstract
Although automatic speech recognition systems can now perform well under certain conditions, they still do not perform well in real-life conditions, especially in noisy environments. Several authors have suggested that using articulatory features rather than acoustic features as a basis for speech parameterization would yield better recognition results. The articulatory features can be recovered from the speech signal by acoustic-to-articulatory inversion. Recovering the articulatory state from the acoustic signal alone is considered difficult because of the "one-to-many" nature of the acoustic-to-articulatory inversion problem: a given articulatory state always has exactly one acoustic realization, but a given acoustic signal can be the outcome of more than one articulatory state. Since visual information is complementary to acoustic information in the inversion, lip tracking is proposed in this paper to provide visual information about lip movement for acoustic-to-articulatory inversion. Encouraging results demonstrate the effectiveness of this method, which provides useful information (i.e., mouth width and height) for the inversion.
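The abstract does not specify how the mouth width and height features are computed from the tracked lips. The snippet below is a minimal sketch, not the paper's actual algorithm: it assumes a lip tracker already outputs a set of 2-D lip contour points per video frame, and approximates width and height as the horizontal and vertical extents of that contour. The function name and the bounding-box approximation are hypothetical illustrations only.

```python
import numpy as np

def mouth_width_height(lip_points: np.ndarray) -> tuple[float, float]:
    """Approximate mouth width and height from tracked lip contour points.

    lip_points: (N, 2) array of (x, y) pixel coordinates along the lip contour
    (hypothetical tracker output; the paper's tracker is not described here).
    Width  = horizontal extent between the two mouth corners.
    Height = vertical extent between the upper and lower lip.
    """
    xs, ys = lip_points[:, 0], lip_points[:, 1]
    width = float(xs.max() - xs.min())   # left corner to right corner
    height = float(ys.max() - ys.min())  # top of upper lip to bottom of lower lip
    return width, height

# Usage example with a synthetic, ellipse-like lip contour (coordinates are
# illustrative only, not real tracking data).
theta = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
contour = np.stack([50.0 * np.cos(theta) + 100.0,   # x coordinates
                    20.0 * np.sin(theta) + 80.0],   # y coordinates
                   axis=1)
w, h = mouth_width_height(contour)
print(f"mouth width: {w:.1f} px, mouth height: {h:.1f} px")
```

In a complete system, the per-frame (width, height) pairs would form the visual feature stream supplied alongside the acoustic features to the inversion model, helping disambiguate acoustically similar articulatory states.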