Neural Mesh-Based Graphics
Abstract
We revisit NPBG [2], a popular approach to novel view synthesis that introduced the now-ubiquitous point-feature neural rendering paradigm. We are particularly interested in data-efficient learning with fast view synthesis. We achieve this through a view-dependent, mesh-based rasterization of denser point descriptors, combined with a foreground/background scene rendering split and an improved loss. Training solely on a single scene, we outperform NPBG [2], which was trained on ScanNet [9] and then finetuned per scene. We also perform competitively against the state-of-the-art method SVS [42], which was trained on full datasets (DTU [1] and Tanks and Temples [22]) and then finetuned per scene, despite its deeper neural renderer.
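The abstract names three ingredients: view-dependent descriptors anchored on a mesh, a foreground/background rendering split, and an improved loss. Below is a minimal PyTorch sketch of the first two ideas only; it is an illustration under assumptions, not the paper's implementation, and all names (`ViewDependentDescriptors`, `composite`, the gating MLP) are hypothetical.

```python
import torch
import torch.nn as nn

class ViewDependentDescriptors(nn.Module):
    """Learnable per-vertex descriptors gated by viewing direction (assumed design)."""
    def __init__(self, num_vertices: int, dim: int = 8):
        super().__init__()
        self.features = nn.Parameter(0.01 * torch.randn(num_vertices, dim))
        self.view_mlp = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, view_dirs: torch.Tensor) -> torch.Tensor:
        # view_dirs: (num_vertices, 3) unit vectors from each vertex to the camera.
        # Modulate the static descriptors with a view-conditioned gate.
        return self.features * torch.sigmoid(self.view_mlp(view_dirs))

def composite(fg_rgba: torch.Tensor, bg_rgb: torch.Tensor) -> torch.Tensor:
    """Alpha-composite a foreground render over a separately rendered background."""
    rgb, alpha = fg_rgba[..., :3], fg_rgba[..., 3:]
    return alpha * rgb + (1.0 - alpha) * bg_rgb

# Toy usage with random data; a real pipeline would rasterize the modulated
# descriptors into screen space and decode them with a neural renderer.
desc = ViewDependentDescriptors(num_vertices=1000)
dirs = torch.nn.functional.normalize(torch.randn(1000, 3), dim=-1)
vertex_feats = desc(dirs)       # (1000, 8) view-dependent descriptors
fg = torch.rand(64, 64, 4)      # hypothetical foreground RGBA render
bg = torch.rand(64, 64, 3)      # hypothetical background RGB render
img = composite(fg, bg)         # (64, 64, 3) composited image
```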
Domains
Computer Science [cs]

Origin: Files produced by the author(s)