PnP-ReG: Learned Regularizing Gradient for Plug-and-Play Gradient Descent
Abstract
The Plug-and-Play (PnP) framework makes it possible to integrate advanced image denoising priors into optimization algorithms, to efficiently solve a variety of image restoration tasks generally formulated as Maximum A Posteriori (MAP) estimation problems. The Plug-and-Play alternating direction method of multipliers (ADMM) and the Regularization by Denoising (RED) algorithms are two examples of such methods that made a breakthrough in image restoration. However, the former Plug-and-Play approach only applies to proximal algorithms, and while the explicit regularization in RED can be used in various algorithms, including gradient descent, the gradient of the regularizer computed as a denoising residual leads to several approximations of the underlying image prior in the MAP interpretation of the denoiser. We show that it is possible to train a network directly modeling the gradient of a MAP regularizer while jointly training the corresponding MAP denoiser. We use this network in gradient-based optimization methods and obtain better results compared with other generic Plug-and-Play approaches. We also show that the regularizer can be used as a pre-trained network for unrolled gradient descent. Lastly, we show that the resulting denoiser allows for a better convergence of the Plug-and-Play ADMM.
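To illustrate the idea summarized above, the following is a minimal conceptual sketch (not the authors' code) of plug-and-play gradient descent in which the regularizer gradient is supplied by a learned network. The class `RegGradNet` and the function `pnp_gradient_descent` are hypothetical stand-ins for the trained PnP-ReG model and the optimization loop; the data-fidelity gradient shown corresponds to simple denoising.

```python
# Conceptual sketch: x_{k+1} = x_k - step * (grad_datafit(x_k) + lam * reg_grad(x_k))
import torch
import torch.nn as nn

class RegGradNet(nn.Module):
    """Placeholder for a network trained to output the gradient of a MAP regularizer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

def pnp_gradient_descent(y, reg_grad, datafit_grad, lam=0.1, step=0.5, n_iter=100):
    """Gradient descent where the prior term's gradient is given by a learned network."""
    x = y.clone()
    for _ in range(n_iter):
        with torch.no_grad():
            g = datafit_grad(x, y) + lam * reg_grad(x)
        x = x - step * g
    return x

# Example usage: denoising, where the data-fidelity gradient is simply (x - y).
reg_grad = RegGradNet()              # in practice: load pre-trained weights
y = torch.rand(1, 1, 64, 64)         # noisy observation
x_hat = pnp_gradient_descent(y, reg_grad, lambda x, y: x - y)
```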