Journal article, IEEE Transactions on Image Processing, 2022

An Untrained Neural Network Prior for Light Field Compression

Abstract

Deep generative models have proven to be effective priors for solving a variety of image processing problems. However, learning realistic image priors with a large number of parameters requires a large amount of training data. It has been shown recently, with the so-called deep image prior (DIP), that randomly initialized neural networks can act as good image priors without any learning. In this paper, we propose a deep generative model for light fields which is compact and does not require any training data other than the light field itself. To show the potential of the proposed generative model, we develop a complete light field compression scheme with quantization-aware learning and entropy coding of the quantized weights. Experimental results show that the proposed method yields very competitive results compared with state-of-the-art light field compression methods, in terms of both PSNR and MS-SSIM.
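The abstract describes a compact, untrained generator that is overfitted to a single light field, with quantization-aware learning so that only the quantized weights need to be entropy coded. Below is a minimal PyTorch sketch of that general idea, not the authors' implementation: the network architecture, the uniform quantization step, the shadow-weight update scheme, and all tensor sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Compact convolutional decoder mapping a fixed random code to all views."""
    def __init__(self, n_views, latent_ch=64, out_ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_views * out_ch, 3, padding=1),
        )

    def forward(self, z):
        return torch.sigmoid(self.body(z))

def quantize(w, step=1e-3):
    # Uniform scalar quantization of the weights (step size is an assumption).
    return torch.round(w / step) * step

# Toy light field: 5x5 views of 3x64x64 pixels (replace with a real light field).
n_views, H, W = 25, 64, 64
lf = torch.rand(1, n_views * 3, H, W)

net = TinyGenerator(n_views)
z = torch.randn(1, 64, H, W)          # fixed random input code, kept constant
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(200):                 # a few iterations, for illustration only
    # Quantization-aware step: run forward/backward with quantized weights,
    # then apply the gradients to the full-precision "shadow" weights.
    backup = [p.data.clone() for p in net.parameters()]
    for p in net.parameters():
        p.data = quantize(p.data)
    loss = nn.functional.mse_loss(net(z), lf)
    opt.zero_grad()
    loss.backward()
    for p, b in zip(net.parameters(), backup):
        p.data = b                    # restore full-precision weights
    opt.step()

# After convergence, the quantized weights would be entropy coded to form the
# compressed representation of this particular light field.
```

In such a scheme the bitstream is the network itself: the decoder reconstructs the light field by re-running the fixed random code through the entropy-decoded, quantized weights.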
Main file: DDLF_Final.pdf (7.96 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03820927, version 1 (19-10-2022)

Identifiers

  • HAL Id: hal-03820927, version 1

Cite

Xiaoran Jiang, Jinglei Shi, Christine Guillemot. An Untrained Neural Network Prior for Light Field Compression. IEEE Transactions on Image Processing, 2022, pp. 1-15. ⟨hal-03820927⟩