Temporal Shape Transfer Network for 3D Human Motion
Abstract
This paper presents a learning-based approach to human shape transfer between an arbitrary 3D identity mesh and a temporal motion sequence of 3D meshes. Recent approaches tackle human shape and pose transfer on a per-frame basis and do not yet exploit the valuable information about motion dynamics, e.g., body or clothing dynamics, inherently present in motion sequences. Recent datasets provide such sequences of 3D meshes, and this work investigates how to leverage the associated intrinsic temporal features to improve learning-based human shape transfer. These features are expected to help preserve temporal motion and identity consistency over motion sequences. To this end, we introduce a new network architecture that takes as input successive 3D mesh frames of a motion sequence and whose decoder is conditioned on the target shape identity. Training losses are designed to enforce temporal consistency between poses as well as shape preservation over the input frames. Experiments demonstrate substantial qualitative and quantitative improvements from using temporal features, compared to optimization-based and recent learning-based methods.
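The abstract does not include implementation details; the following is a minimal PyTorch-style sketch of the described components (per-frame encoding of the input mesh frames, temporal aggregation over the window, a decoder conditioned on the target identity, and reconstruction plus temporal-consistency losses). All class names, layer sizes, vertex counts, and loss weights below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: V vertices per mesh, T-frame input window.
V, FEAT, ID_DIM = 6890, 128, 64

class TemporalShapeTransfer(nn.Module):
    """Sketch: shared per-frame mesh encoder, temporal mixer over the input
    window, and a decoder conditioned on the target identity code."""
    def __init__(self, n_verts=V, feat=FEAT, id_dim=ID_DIM):
        super().__init__()
        # Per-frame encoder: flattened vertex coordinates -> pose feature.
        self.frame_enc = nn.Sequential(
            nn.Linear(n_verts * 3, 512), nn.ReLU(),
            nn.Linear(512, feat),
        )
        # Temporal mixer: GRU over the sequence of per-frame features.
        self.temporal = nn.GRU(feat, feat, batch_first=True)
        # Identity encoder: target identity mesh (canonical pose) -> identity code.
        self.id_enc = nn.Sequential(
            nn.Linear(n_verts * 3, 512), nn.ReLU(),
            nn.Linear(512, id_dim),
        )
        # Identity-conditioned decoder: (pose feature, identity code) -> mesh.
        self.dec = nn.Sequential(
            nn.Linear(feat + id_dim, 512), nn.ReLU(),
            nn.Linear(512, n_verts * 3),
        )

    def forward(self, motion_seq, identity_mesh):
        # motion_seq: (B, T, V, 3) source motion frames
        # identity_mesh: (B, V, 3) target identity mesh
        B, T, Vn, _ = motion_seq.shape
        pose_feat = self.frame_enc(motion_seq.reshape(B, T, Vn * 3))  # (B, T, feat)
        pose_feat, _ = self.temporal(pose_feat)                       # add temporal context
        id_code = self.id_enc(identity_mesh.reshape(B, Vn * 3))       # (B, id_dim)
        id_code = id_code.unsqueeze(1).expand(-1, T, -1)              # broadcast over time
        out = self.dec(torch.cat([pose_feat, id_code], dim=-1))       # (B, T, V*3)
        return out.reshape(B, T, Vn, 3)

def training_loss(pred, target):
    """Per-frame reconstruction plus a temporal-consistency term matching
    frame-to-frame vertex velocities of prediction and ground truth."""
    recon = (pred - target).pow(2).mean()
    vel_pred = pred[:, 1:] - pred[:, :-1]
    vel_gt = target[:, 1:] - target[:, :-1]
    temporal = (vel_pred - vel_gt).pow(2).mean()
    return recon + 0.5 * temporal  # weight is an arbitrary placeholder
```

The temporal-consistency term here penalizes differences in per-vertex velocities rather than positions, one common way to encode the "consistency between poses" objective mentioned in the abstract; the actual loss design may differ.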
Main file
temporal_shape_transfer_network.pdf (7.76 MB)
supplementary.pdf (4.38 MB)
supplementary_video.mp4 (66.78 MB)
Origin: Files produced by the author(s)