Learning Generalizable Light Field Networks from Few Images
Abstract
We explore a new strategy for few-shot novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray directly to its target pixel's color. The network is conditioned on local ray features generated by coarse volumetric rendering from an explicit 3D feature volume, which is built from the input images with a 3D ConvNet. Our method achieves competitive performance on real MVS data compared to state-of-the-art methods based on neural radiance fields, while offering roughly 50 times faster rendering.
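To make the pipeline in the abstract concrete, here is a minimal PyTorch sketch of the idea: a 3D ConvNet refines a feature volume, coarse volumetric rendering aggregates a feature per ray, and an MLP maps the ray plus its feature to a color. This is not the authors' implementation; every name, shape, layer size, and the alpha-compositing details are illustrative assumptions.

```python
# Hedged sketch only: the real architecture, ray parametrization, and
# feature-volume construction differ from this toy version.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightFieldNetwork(nn.Module):
    """Maps a ray (origin, direction) plus a local ray feature to an RGB color."""

    def __init__(self, feat_dim=16, hidden=128):
        super().__init__()
        # 3D ConvNet refining an explicit feature volume built from the inputs.
        self.volume_net = nn.Sequential(
            nn.Conv3d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, feat_dim + 1, 3, padding=1),  # +1 density channel
        )
        # Implicit network: ray parametrization + ray feature -> pixel color.
        self.mlp = nn.Sequential(
            nn.Linear(6 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def render_ray_features(self, volume, origins, dirs, n_samples=16):
        """Coarse volumetric rendering of features along each ray."""
        out = self.volume_net(volume)                       # (1, C+1, D, H, W)
        feats, density = out[:, :-1], F.softplus(out[:, -1:])
        t = torch.linspace(0.05, 1.0, n_samples, device=origins.device)
        # Sample points along rays, assumed to live in a [-1, 1]^3 volume.
        pts = origins[:, None] + t[None, :, None] * dirs[:, None]  # (R, S, 3)
        grid = pts.view(1, -1, 1, 1, 3)
        f = F.grid_sample(feats, grid, align_corners=True)   # (1, C, R*S, 1, 1)
        s = F.grid_sample(density, grid, align_corners=True)
        n_rays = origins.shape[0]
        f = f.view(1, -1, n_rays, n_samples).permute(2, 3, 1, 0).squeeze(-1)
        s = s.view(n_rays, n_samples, 1)
        # Standard alpha compositing of the sampled features along each ray.
        alpha = 1.0 - torch.exp(-s)
        trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                         1.0 - alpha + 1e-10], 1), 1)[:, :-1]
        return (trans * alpha * f).sum(dim=1)                # (R, C)

    def forward(self, volume, origins, dirs):
        ray_feat = self.render_ray_features(volume, origins, dirs)
        return self.mlp(torch.cat([origins, dirs, ray_feat], dim=-1))


# Toy usage: a placeholder feature volume and a batch of 1024 rays.
net = LightFieldNetwork()
volume = torch.randn(1, 16, 32, 32, 32)   # in the paper, built from input images
origins = torch.zeros(1024, 3)
dirs = F.normalize(torch.randn(1024, 3), dim=-1)
rgb = net(volume, origins, dirs)           # (1024, 3), one color per ray
```

The key design point the abstract highlights is that the expensive volumetric step runs only coarsely, to produce a conditioning feature per ray; the final color comes from a single MLP evaluation per ray, which is what makes rendering much faster than a full radiance-field query.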