Should penalized least squares regression be interpreted as Maximum A Posteriori estimation?
Abstract
Penalized least squares regression is often used for signal denoising and inverse problems, and is commonly interpreted in a Bayesian framework as a Maximum A Posteriori (MAP) estimator, the penalty function being the negative logarithm of the prior. For example, the widely used quadratic program (with an $\ell^1$ penalty) associated to the LASSO / Basis Pursuit Denoising is very often considered as MAP estimation under a Laplacian prior. The objective of this paper is to highlight the fact that, while this is {\em one} possible Bayesian interpretation, there can be other equally acceptable Bayesian interpretations. Therefore, solving a penalized least squares regression problem with penalty $\phi(x)$ should not necessarily be interpreted as assuming a prior $C\cdot \exp(-\phi(x))$ and using the MAP estimator. In particular, we show that for {\em any} prior $p_X(x)$, the conditional mean is the solution of a penalized least squares problem with some penalty $\phi(x)$, and can therefore be interpreted as a MAP estimator with the prior $C \cdot \exp(-\phi(x))$. Vice versa, for {\em certain} penalties $\phi(x)$, the solution of the penalized least squares problem is indeed the {\em conditional mean}, with a certain prior $p_X(x)$. In general, $p_X(x) \neq C \cdot \exp(-\phi(x))$.
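To make the distinction concrete, here is a sketch of the three estimators at stake, written for the standard additive white Gaussian noise denoising model $y = x + n$ with $n \sim \mathcal{N}(0, \sigma^2 \mathrm{Id})$ (this observation model and the $1/2\sigma^2$ normalization of the data-fidelity term are assumed here for illustration; they are not part of the abstract itself):

$$
\begin{aligned}
\hat{x}_{\phi}(y) &= \operatorname*{arg\,min}_{x} \; \frac{1}{2\sigma^2}\|y - x\|_2^2 + \phi(x) && \text{(penalized least squares)}\\
\hat{x}_{\mathrm{MAP}}(y) &= \operatorname*{arg\,max}_{x} \; p_{X \mid Y}(x \mid y) = \operatorname*{arg\,min}_{x} \; \frac{1}{2\sigma^2}\|y - x\|_2^2 - \log p_X(x) && \text{(MAP)}\\
\hat{x}_{\mathrm{MMSE}}(y) &= \mathbb{E}\left[X \mid Y = y\right] && \text{(conditional mean)}
\end{aligned}
$$

In these terms, the paper's claim is that $\hat{x}_{\mathrm{MMSE}}$ coincides with $\hat{x}_{\phi}$ for a suitable penalty $\phi$, so it formally looks like a MAP estimator with prior $C \cdot \exp(-\phi(x))$, even though in general $\phi(x) \neq -\log p_X(x) + \mathrm{const}$.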