Light Field Compression via Compact Neural Scene Representation
Abstract
In this paper, we propose a novel light field compression method based on a low-rank-constrained neural scene representation. While most existing methods directly compress the light field views, our method first learns a Multi-Layer Perceptron (MLP)-based Neural Radiance Field (NeRF) from the input views. To efficiently compress this NeRF scene representation, the weights of the MLP are optimized under a low-rank constraint using the Alternating Direction Method of Multipliers (ADMM). The NeRF weights are then decomposed into Tensor Train (TT) components, which allow us to distill the original NeRF network into a slimmer one. The slim NeRF is finally refined with a quantization-aware training procedure. Experimental results show that this low-rank-constrained, NeRF-based light field compression method achieves better rate-distortion performance than reference methods, while preserving the free-viewpoint reconstruction capability.
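The abstract mentions optimizing the MLP weights under a low-rank constraint with ADMM. The snippet below is a minimal, hypothetical sketch of one such ADMM iteration on a single weight matrix, assuming PyTorch; the helper `truncated_svd_projection` and the hyperparameters (`rank`, `rho`, `lr`) are illustrative choices and not taken from the paper.

```python
# Minimal sketch (not the authors' code) of an ADMM-style low-rank constraint on an
# MLP weight matrix W: the auxiliary variable Z is projected onto rank-r matrices by
# truncated SVD, and the dual variable U accumulates the constraint residual W - Z.
import torch


def truncated_svd_projection(W, rank):
    """Project W onto the set of matrices of rank <= rank (hypothetical helper)."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]


def admm_low_rank_step(W, Z, U, rank, rho=1e-3, lr=1e-2, data_grad=None):
    """One ADMM iteration: gradient step on W with the augmented penalty,
    low-rank projection for Z, and dual ascent on U."""
    # W-update: task-loss gradient plus the augmented Lagrangian term rho * (W - Z + U)
    grad = (data_grad if data_grad is not None else torch.zeros_like(W)) + rho * (W - Z + U)
    W = W - lr * grad
    # Z-update: closest rank-constrained matrix to W + U
    Z = truncated_svd_projection(W + U, rank)
    # Dual update: accumulate the residual of the constraint W = Z
    U = U + (W - Z)
    return W, Z, U
```

After convergence, W stays close to a rank-r matrix, which is what makes the subsequent Tensor Train decomposition and distillation into a slimmer NeRF effective.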