Kullback Proximal Algorithms for Maximum Likelihood Estimation
Abstract
In this paper, we study the convergence of a new class of fast and stable sequential optimization methods for computing maximum likelihood estimates. These methods are based on a proximal point algorithm implemented with a Kullback-type proximal function. When the proximal regularization parameter is set to unity, the classical expectation-maximization (EM) algorithm is recovered. Other values of the regularization parameter yield relaxed versions of EM that can converge much faster. In particular, if the sequence of regularization parameters converges to zero, a superlinearly convergent algorithm is obtained. We present an implementation of the algorithm using a trust-region update strategy. For illustration, the method is applied to a non-quadratic inverse problem with Poisson-distributed data.
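As a minimal sketch of the iteration summarized above (the notation here is assumed for illustration and is not taken verbatim from the paper), the Kullback proximal update at step $k$ can be written as

\[
\theta^{k+1} \in \arg\max_{\theta} \left\{ \ell_y(\theta) - \beta_k \, I_y\big(\theta^k, \theta\big) \right\},
\]

where $\ell_y(\theta)$ denotes the observed-data log-likelihood, $I_y(\bar\theta, \theta)$ is a Kullback-type proximal penalty (e.g., a Kullback-Leibler divergence between complete-data conditional densities), and $\beta_k > 0$ is the proximal regularization parameter. Under this reading, $\beta_k \equiv 1$ reproduces the classical EM iteration, while letting $\beta_k \to 0$ gives the relaxed, faster-converging variants described in the abstract.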