A Bio-inspired Synergistic Virtual Retina Model for Tone Mapping
Abstract
Real-world radiance values span several orders of magnitude, which artificial systems must process in order to capture visual scenes with high visual sensitivity. Interestingly, similar processing has been found in biological systems, starting at the retina. Our motivation in this paper is therefore to develop a new video tone mapping operator (TMO) based on a synergistic model of the retina. We start from the so-called Virtual Retina model, developed in computational neuroscience, and show how to enrich it with new features so that it can serve as a TMO: color management, luminance adaptation at the photoreceptor level, and readout from a heterogeneous population activity. Our method works for video but can also be applied to static images (by repeating the image in time). It has been carefully evaluated on standard benchmarks in the static case, giving results comparable to the state of the art with default parameters, while offering user control for finer tuning. Results on HDR videos are also promising, specifically with respect to temporal luminance coherency. As a whole, this paper shows a promising way to address computational photography challenges by exploiting current neuroscience research on retinal processing.
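To make the "luminance adaptation at the photoreceptor level" idea concrete, here is a minimal sketch of the classic Naka–Rushton-style compression that retina-based TMOs typically build on. This is an illustrative assumption, not the authors' exact formulation: the function name, the exponent `n`, and the choice of a global mean-luminance semi-saturation are all hypothetical.

```python
# Minimal sketch (assumption, not the paper's exact model): Naka-Rushton-style
# photoreceptor adaptation, V = L^n / (L^n + sigma^n), which compresses an
# HDR luminance range into [0, 1). Parameter values are illustrative.
import numpy as np

def photoreceptor_adaptation(luminance, n=0.7, eps=1e-6):
    """Compress HDR luminance with a semi-saturation tied to the scene mean."""
    sigma = luminance.mean() + eps        # global adaptation level (assumption)
    Ln = np.power(luminance, n)
    return Ln / (Ln + np.power(sigma, n))

# Example: tone-map a synthetic frame spanning ~6 orders of magnitude.
hdr = np.logspace(-2, 4, 256 * 256).reshape(256, 256)
ldr = photoreceptor_adaptation(hdr)
print(ldr.min(), ldr.max())               # outputs now lie in [0, 1)
```

A local (per-region) semi-saturation, rather than the global mean used above, would bring the sketch closer to spatially adaptive retinal behavior, at the cost of potential halo artifacts.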