Monocular Human Shape and Pose with Dense Mesh-borne Local Image Features
Abstract
We propose to improve on graph convolution based approaches for human shape and pose estimation from monocular input, using pixel-aligned local image features. Given a single input color image, existing graph convolutional network (GCN) based techniques for human shape and pose estimation (e.g. [19]) use a single global image feature, generated by a convolutional neural network (CNN) and appended equally to all mesh vertices, to initialize the GCN stage, which transforms a template T-posed mesh into the target pose. In contrast, we propose for the first time the idea of using local image features per vertex. These features are sampled from the CNN image feature maps by utilizing pixel-to-mesh correspondences generated with DensePose [11]. Our quantitative and qualitative results on standard benchmarks show that using local features improves on global ones and leads to competitive performance with respect to the state of the art.
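The core idea can be illustrated with a minimal sketch, not taken from the paper's code: instead of tiling one global CNN feature across every mesh vertex, each vertex samples its own local feature from the CNN feature map at the image location given by DensePose-style pixel-to-mesh correspondences. All names, shapes, and the use of bilinear `grid_sample` below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def per_vertex_local_features(feature_map, vertex_uv):
    """
    feature_map: (B, C, H, W) CNN feature map of the input image.
    vertex_uv:   (B, V, 2) image-plane coordinates of each mesh vertex,
                 normalized to [-1, 1] (e.g. derived from DensePose
                 pixel-to-mesh correspondences).
    returns:     (B, V, C) one local feature vector per vertex.
    """
    # grid_sample expects a sampling grid of shape (B, H_out, W_out, 2);
    # treat the V vertices as a (V, 1) "image" of sample points.
    grid = vertex_uv.unsqueeze(2)                                      # (B, V, 1, 2)
    sampled = F.grid_sample(feature_map, grid,
                            mode='bilinear', align_corners=False)      # (B, C, V, 1)
    return sampled.squeeze(-1).permute(0, 2, 1)                        # (B, V, C)

# Hypothetical usage: these per-vertex features would initialize the GCN that
# deforms the template mesh, replacing a single global feature tiled over vertices.
B, C, H, W, V = 1, 256, 56, 56, 6890      # 6890 = SMPL vertex count (assumption)
feats = torch.randn(B, C, H, W)
uv = torch.rand(B, V, 2) * 2 - 1          # placeholder normalized coordinates
local = per_vertex_local_features(feats, uv)
print(local.shape)                        # torch.Size([1, 6890, 256])
```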
Domains
Multimedia [cs.MM]
Main file
Monocular_Human_Shape_and_Pose_with_Dense_Mesh-borne_Local_Image_Features.pdf (14.68 MB)
Origin | Files produced by the author(s)