Journal article in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024

LIA: Latent Image Animator

Abstract

Previous animation techniques have mainly focused on leveraging explicit structural representations (e.g., meshes or keypoints) to transfer motion from driving videos to source images. However, such methods struggle with large appearance variations between source and driving data and require complex additional modules to model appearance and motion separately. To address these issues, we introduce the Latent Image Animator (LIA), streamlined to animate high-resolution images. LIA is designed as a simple autoencoder that does not rely on explicit representations. Motion transfer in pixel space is modeled as linear navigation of motion codes in the latent space. Specifically, such navigation is represented by an orthogonal motion dictionary learned in a self-supervised manner based on the proposed Linear Motion Decomposition (LMD). Extensive experimental results demonstrate that LIA outperforms the state of the art on the VoxCeleb, TaichiHD, and TED-talk datasets with respect to video quality and spatiotemporal consistency. In addition, LIA is well suited for zero-shot high-resolution image animation. Code, models, and demo videos are available at https://wyhsirius.github.io/LIA-project/.
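
To illustrate the Linear Motion Decomposition idea described in the abstract, the following PyTorch sketch expresses a target latent code as the source code plus a linear combination of orthogonal motion directions. This is a minimal illustration under stated assumptions, not the authors' implementation: the class and variable names, the latent and dictionary sizes, and the QR-based orthogonalization are assumptions made here for clarity.

    # Minimal sketch of Linear Motion Decomposition (LMD); not the authors' code.
    import torch
    import torch.nn as nn

    class LinearMotionDecomposition(nn.Module):
        def __init__(self, latent_dim=512, num_directions=20):
            super().__init__()
            # Learnable motion dictionary; re-orthogonalized at every forward pass.
            self.dictionary = nn.Parameter(torch.randn(num_directions, latent_dim))

        def forward(self, z_source, magnitudes):
            # z_source:   (B, latent_dim)      latent code of the source image
            # magnitudes: (B, num_directions)  coefficients predicted from the driving frame
            # Reduced QR yields an orthonormal basis spanning the dictionary rows.
            q, _ = torch.linalg.qr(self.dictionary.t())   # (latent_dim, num_directions)
            directions = q.t()                             # (num_directions, latent_dim)
            # Linear navigation in latent space: z_target = z_source + sum_i a_i * d_i
            return z_source + magnitudes @ directions

    if __name__ == "__main__":
        lmd = LinearMotionDecomposition()
        z_src = torch.randn(2, 512)    # latent codes of two source images
        coeffs = torch.randn(2, 20)    # magnitudes, in practice regressed from driving frames
        z_drv = lmd(z_src, coeffs)
        print(z_drv.shape)             # torch.Size([2, 512])

In this reading of the abstract, animating an image amounts to moving its latent code along a few shared, orthogonal directions, with only the per-frame magnitudes depending on the driving video.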
Main file: LIA_PAMI.pdf (6.78 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04837897, version 1 (14-12-2024)

Cite

Yaohui Wang, Di Yang, Francois Bremond, Antitza Dantcheva. LIA: Latent Image Animator. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46 (12), pp.10829-10844. ⟨10.1109/TPAMI.2024.3449075⟩. ⟨hal-04837897⟩