Conference paper, Year: 2018

From attribute-labels to faces: face generation using a conditional generative adversarial network

Abstract

Facial attributes are instrumental in semantically characterizing faces. Automated classification of such attributes (e.g., age, gender, ethnicity) has been a well-studied topic. Here we explore the inverse problem: given attribute labels, generate faces associated with those attributes. Interest in this topic is fueled by related applications in law enforcement and entertainment. In this work, we propose two models for attribute-label-based facial image and video generation, incorporating 2D and 3D deep conditional generative adversarial networks (DCGAN). The attribute labels serve to determine the specific representations of the generated images and videos. While these are early results, our findings indicate the methods' ability to generate realistic faces from attribute labels.
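
The abstract describes conditioning a deep convolutional GAN on attribute labels. The sketch below is a minimal illustration in PyTorch, not the authors' implementation: it shows one common way to realize such conditioning for the 2D image case, by concatenating a binary attribute vector with the noise input of a DCGAN-style generator. All names and dimensions (noise_dim=100, 40 attributes, 64x64 RGB output) are illustrative assumptions.

# Illustrative sketch only (assumed architecture, not the authors' code):
# a conditional DCGAN-style generator that concatenates a noise vector
# with a binary attribute-label vector before upsampling to an image.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, n_attributes=40, feat=64):
        super().__init__()
        in_dim = noise_dim + n_attributes  # condition by concatenating labels to noise
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # 64x64 RGB face image with values in [-1, 1]
        )

    def forward(self, noise, attributes):
        # noise: (B, noise_dim), attributes: (B, n_attributes) with entries in {0, 1}
        z = torch.cat([noise, attributes], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(z)

# Example: generate one face conditioned on a random attribute vector.
g = ConditionalGenerator()
img = g(torch.randn(1, 100), torch.randint(0, 2, (1, 40)).float())
print(img.shape)  # torch.Size([1, 3, 64, 64])

The paper's video model would extend this idea with 3D (spatio-temporal) convolutions; the 2D case above is kept deliberately minimal.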
Main file
Wang_Dantcheva_Bremond_ECCVW_18.pdf (7.22 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01894150, version 1 (12-10-2018)

Identifiers

  • HAL Id: hal-01894150, version 1

Cite

Yaohui Wang, Antitza Dantcheva, Francois Bremond. From attribute-labels to faces: face generation using a conditional generative adversarial network. ECCVW'18, 5th Women in Computer Vision (WiCV) Workshop in conjunction with the European Conference on Computer Vision, Sep 2018, Munich, Germany. ⟨hal-01894150⟩
119 Views
334 Downloads
