Correspondence-free online human motion retargeting
Abstract
We present a data-driven framework for unsupervised human motion retargeting that animates a target subject with the motion of a source subject. Our method is correspondence-free, requiring neither spatial correspondences between the source and target shapes nor temporal correspondences between different frames of the source motion. This makes it possible to animate a target shape with arbitrary sequences of humans in motion, possibly captured using 4D acquisition platforms or consumer devices. Our method unifies the advantages of two existing lines of work, namely skeletal motion retargeting, which leverages long-term temporal context, and surface-based retargeting, which preserves surface details, by combining a geometry-aware deformation model with a skeleton-aware motion transfer approach. As a result, it takes long-term temporal context into account while preserving surface details. During inference, our method runs online, i.e., the input can be processed serially, and retargeting is performed in a single forward pass per frame. Experiments show that including long-term temporal context during training improves the method's accuracy both for skeletal motion and for detail preservation. Furthermore, our method generalizes to unobserved motions and body shapes. We demonstrate that it achieves state-of-the-art results on two test datasets and that it can be used to animate human models with the output of a multi-view acquisition platform. Code is available at https://gitlab.inria.fr/rrekikdi/human-motion-retargeting2023.
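To illustrate the online inference pattern the abstract describes, the following is a minimal sketch, not the authors' code: source frames are consumed serially and each retargeted frame is produced by a single forward pass that conditions on the target identity. The network, tensor shapes, and data sources below are hypothetical placeholders that only show the data flow, not the geometry- and skeleton-aware architecture of the paper.

```python
# Minimal sketch of online, per-frame motion retargeting (hypothetical names).
import torch
import torch.nn as nn

class PlaceholderRetargetingNet(nn.Module):
    """Stand-in for a retargeting network; not the authors' architecture."""
    def __init__(self, num_vertices: int = 6890, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Linear(num_vertices * 3, feat_dim)
        self.decoder = nn.Linear(feat_dim * 2, num_vertices * 3)

    def forward(self, source_frame: torch.Tensor, target_shape: torch.Tensor) -> torch.Tensor:
        # Encode the source pose and the target identity, then decode the
        # deformed target vertices for this single frame.
        src = self.encoder(source_frame.flatten())
        tgt = self.encoder(target_shape.flatten())
        return self.decoder(torch.cat([src, tgt])).view(-1, 3)

model = PlaceholderRetargetingNet().eval()
target_shape = torch.randn(6890, 3)                          # target identity to animate
source_stream = (torch.randn(6890, 3) for _ in range(100))   # e.g., frames from a capture rig

with torch.no_grad():
    for source_frame in source_stream:                  # frames arrive serially (online)
        retargeted = model(source_frame, target_shape)  # one forward pass per frame
```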
Domains
Computer Science [cs]
Main file
motion_retargeting_3DV_preprint_.pdf (3.42 Mo)
Correspondance-free online human retargeting.mp4 (17.91 Mo)
motion_retargeting_3DV_preprint_-1.pdf (3.24 Mo)
Origin | Files produced by the author(s) |
---|---|