Conference paper, 2012

Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis

Abstract

The importance of modeling speech articulation for high-quality audiovisual (AV) speech synthesis is widely acknowledged. Nevertheless, while state-of-the-art, data-driven approaches to facial animation can make use of sophisticated motion capture techniques, the animation of the intraoral articulators (viz. the tongue, jaw, and velum) typically relies on simple rules or viseme morphing, in stark contrast to the otherwise high quality of facial modeling. Using appropriate speech production data could significantly improve the quality of articulatory animation for AV synthesis.
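As an illustration of the viseme-morphing baseline the abstract contrasts with data-driven animation, the following is a minimal, hypothetical sketch: each phone is mapped to a static articulator pose (a viseme), and intermediate frames are produced by linear interpolation between consecutive poses. The viseme table, parameter vectors, and function names below are illustrative assumptions for exposition only, not the authors' system or data.

```python
import numpy as np

# Hypothetical viseme table: each viseme is a static articulator pose,
# represented here as a small parameter vector (e.g. jaw opening,
# lip rounding, tongue-tip height). The values are invented for illustration.
VISEMES = {
    "A": np.array([0.8, 0.1, 0.2]),  # open vowel
    "U": np.array([0.3, 0.9, 0.3]),  # rounded vowel
    "T": np.array([0.2, 0.1, 0.9]),  # alveolar closure
}

def morph(v_from: str, v_to: str, t: float) -> np.ndarray:
    """Linearly interpolate between two viseme poses, with 0 <= t <= 1."""
    return (1.0 - t) * VISEMES[v_from] + t * VISEMES[v_to]

# Generate intermediate frames between two visemes; a rule-based pipeline
# would time these frames against the phone durations of the utterance.
frames = [morph("A", "T", t) for t in np.linspace(0.0, 1.0, 5)]
for frame in frames:
    print(np.round(frame, 2))
```

Such keyframe interpolation ignores coarticulation and articulator dynamics, which is precisely the gap that the speech production data discussed in the paper is meant to address and evaluate.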
Files

abstract.pdf (415.58 KB)
slides.pdf (6.34 MB)

Origin: Files produced by the author(s)

Dates and versions

hal-00734464, version 1 (22-09-2012)

Cite

Ingmar Steiner, Korin Richmond, Slim Ouni. Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis. 3rd International Symposium on Facial Analysis and Animation - FAA 2012, Sep 2012, Vienna, Austria. ⟨hal-00734464⟩