Conference Papers, Year: 2023

Neural Mesh-Based Graphics

Abstract

We revisit NPBG [2], the popular approach to novel view synthesis that introduced the ubiquitous point feature neural rendering paradigm. We focus in particular on data-efficient learning with fast view synthesis. We achieve this through view-dependent, mesh-based rasterization of denser point descriptors, together with a foreground/background scene rendering split and an improved loss. By training solely on a single scene, we outperform NPBG [2], which has been trained on ScanNet [9] and then scene-finetuned. We also perform competitively with respect to the state-of-the-art method SVS [42], which has been trained on the full dataset (DTU [1] and Tanks and Temples [22]) and then scene-finetuned, despite its deeper neural renderer.
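
For readers unfamiliar with the point feature neural rendering paradigm the abstract refers to, the sketch below illustrates the general idea: learnable per-vertex descriptors are splatted into an image-space feature map using mesh rasterization outputs (per-pixel face vertex indices and barycentric weights, assumed to be precomputed here), and a small CNN renderer decodes that feature map into RGB, with separate foreground and background branches composited by a predicted alpha. This is a minimal illustrative sketch under those assumptions; all module and tensor names are hypothetical and it does not reproduce the authors' implementation.

# Illustrative sketch of vertex-feature neural rendering with a
# foreground/background split. Names and shapes are assumptions for
# exposition only, not the authors' implementation.
import torch
import torch.nn as nn


class FeatureRasterizer(nn.Module):
    """Splats learnable per-vertex descriptors into an image-space feature map.

    Assumes rasterization outputs (per-pixel face vertex indices and
    barycentric weights) were precomputed, e.g. by a differentiable rasterizer.
    """

    def __init__(self, num_vertices: int, feat_dim: int = 8):
        super().__init__()
        self.descriptors = nn.Parameter(torch.randn(num_vertices, feat_dim) * 0.01)

    def forward(self, pix_to_vertex: torch.Tensor, bary: torch.Tensor) -> torch.Tensor:
        # pix_to_vertex: (H, W, 3) vertex indices of the face covering each pixel
        #                (-1 where no face is hit).
        # bary:          (H, W, 3) barycentric weights of that face at the pixel.
        H, W, _ = pix_to_vertex.shape
        valid = (pix_to_vertex >= 0).all(dim=-1, keepdim=True)        # (H, W, 1)
        idx = pix_to_vertex.clamp(min=0).reshape(-1)                  # (H*W*3,)
        feats = self.descriptors[idx].reshape(H, W, 3, -1)            # (H, W, 3, C)
        feat_img = (feats * bary.unsqueeze(-1)).sum(dim=2)            # (H, W, C)
        feat_img = feat_img * valid                                   # zero out empty pixels
        return feat_img.permute(2, 0, 1).unsqueeze(0)                 # (1, C, H, W)


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class SplitRenderer(nn.Module):
    """Tiny CNN renderer with foreground/background heads and alpha compositing."""

    def __init__(self, feat_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.fg = nn.Sequential(conv_block(feat_dim, hidden), nn.Conv2d(hidden, 4, 1))
        self.bg = nn.Sequential(conv_block(feat_dim, hidden), nn.Conv2d(hidden, 3, 1))

    def forward(self, fg_feats: torch.Tensor, bg_feats: torch.Tensor) -> torch.Tensor:
        fg_out = self.fg(fg_feats)                       # (1, 4, H, W): RGB + alpha
        rgb_fg, alpha = fg_out[:, :3], torch.sigmoid(fg_out[:, 3:4])
        rgb_bg = self.bg(bg_feats)                       # (1, 3, H, W)
        return alpha * rgb_fg + (1.0 - alpha) * rgb_bg   # composited image


if __name__ == "__main__":
    H, W, V = 64, 64, 1000
    # Fake rasterization outputs standing in for a real mesh rasterizer.
    pix_to_vertex = torch.randint(0, V, (H, W, 3))
    bary = torch.rand(H, W, 3)
    bary = bary / bary.sum(dim=-1, keepdim=True)

    raster = FeatureRasterizer(num_vertices=V)
    renderer = SplitRenderer()
    fg_feats = raster(pix_to_vertex, bary)
    image = renderer(fg_feats, torch.zeros_like(fg_feats))
    print(image.shape)  # torch.Size([1, 3, 64, 64])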
Main file: ECCV_Hal.pdf (126.6 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03942106, version 1 (16-01-2023)

Identifiers

HAL Id: hal-03942106
DOI: 10.1007/978-3-031-25066-8_45

Cite

Shubhendu Jena, Franck Multon, Adnane Boukhayma. Neural Mesh-Based Graphics. ECCV 2022 Workshops, Oct 2022, Tel-Aviv, Israel. pp.739-757, ⟨10.1007/978-3-031-25066-8_45⟩. ⟨hal-03942106⟩
