Structural Learning of Dynamic Bayesian Networks in Speech Recognition
Abstract
We present a speech modeling methodology in which no a priori assumption is made about the dependencies between the observed and the hidden speech processes. Instead, these dependencies are learned from data. This methodology guarantees improved modeling fidelity compared to HMMs. In addition, it gives the user control over the trade-off between modeling accuracy and model complexity. Furthermore, the approach is technically very attractive because all of the computational effort is spent in the training phase.
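As a loose illustration of the accuracy/complexity trade-off mentioned above, the sketch below scores candidate parent sets for a single discrete node using a penalized log-likelihood (a BIC-style score). The scoring criterion, the `penalty_weight` knob, and all function names are assumptions introduced for illustration only; they are not necessarily the structure-learning method used in this work.

```python
# Illustrative sketch: score-based selection of dependencies (parent sets)
# for one discrete node, trading data fit against model complexity.
# BIC-style scoring is an assumption here, not the paper's stated criterion.
import numpy as np
from itertools import combinations

def family_score(data, child, parents, arities, penalty_weight=1.0):
    """Log-likelihood of `child` given `parents`, minus a complexity penalty.

    data: (N, num_vars) integer array of discrete observations.
    arities: number of states of each variable.
    penalty_weight: scales the penalty; larger values favor sparser structures.
    """
    n = data.shape[0]
    # Count joint configurations of (parent configuration, child state).
    parent_card = int(np.prod([arities[p] for p in parents])) if parents else 1
    counts = np.zeros((parent_card, arities[child]))
    for row in data:
        idx = 0
        for p in parents:
            idx = idx * arities[p] + row[p]
        counts[idx, row[child]] += 1
    # Maximum-likelihood log-likelihood of the child given its parents.
    parent_totals = counts.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        probs = np.where(parent_totals > 0, counts / parent_totals, 0.0)
        ll = np.sum(counts * np.where(counts > 0, np.log(probs), 0.0))
    # BIC-style penalty: free parameters times log(N)/2, scaled by penalty_weight.
    num_params = parent_card * (arities[child] - 1)
    return ll - penalty_weight * 0.5 * np.log(n) * num_params

def best_parent_set(data, child, candidates, arities, max_parents=2, penalty_weight=1.0):
    """Exhaustively pick the highest-scoring parent set of bounded size."""
    best = ((), family_score(data, child, (), arities, penalty_weight))
    for k in range(1, max_parents + 1):
        for parents in combinations(candidates, k):
            s = family_score(data, child, parents, arities, penalty_weight)
            if s > best[1]:
                best = (parents, s)
    return best
```

Raising `penalty_weight` biases the search toward fewer dependencies (a simpler, HMM-like structure), while lowering it admits richer dependency structures; this is one concrete way such a user-controlled accuracy/complexity trade-off can be realized.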