Stochastic Majorize-Minimize Subspace Algorithm with Application to Binary Classification
Abstract
In a learning context, data distributions are usually unknown and observation models are sometimes complex. In an inverse problem setup, these facts often lead to the minimization of a loss function whose analytic expression is uncertain, so that its gradient cannot be evaluated exactly. These issues have promoted the development of so-called stochastic optimization methods, which are able to cope with stochastic errors in the gradient term. A natural strategy is to start from a deterministic optimization approach as a baseline and to incorporate a stabilization procedure (e.g., a decreasing stepsize or averaging) that yields improved robustness to stochastic errors. In the context of large-scale differentiable optimization, an important class of methods relies on the majorization-minimization (MM) principle. MM algorithms are becoming increasingly popular in signal/image processing and machine learning: they are fast, stable, require limited manual tuning, and are often preferred by practitioners in application domains such as medical imaging and telecommunications. The present work introduces novel theoretical convergence guarantees for MM algorithms when approximate gradient terms are employed, generalizing recent work to a wider class of functions and algorithms. We illustrate our theoretical results on a binary classification problem.
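To make the general setting concrete, the following is a minimal illustrative sketch, not the algorithm analyzed in the paper: a stochastic majorize-minimize iteration for L2-regularized logistic regression, where the quadratic majorant relies on the classical curvature bound of the logistic loss, the exact gradient is replaced by a mini-batch estimate, and a decreasing stepsize provides the stabilization mentioned above. The subspace acceleration studied in the paper is omitted, and all data, names, and parameter values below are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch (not the authors' algorithm): stochastic MM step for
# L2-regularized logistic regression with a fixed quadratic majorant.
rng = np.random.default_rng(0)

# Synthetic binary classification data with labels in {-1, +1} (assumed setup).
n, d, lam = 1000, 20, 1e-2
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))

# Majorant curvature matrix: the Hessian of the regularized logistic loss is
# bounded above by (1/4n) X^T X + lam * I, giving a valid quadratic majorant.
A = 0.25 * (X.T @ X) / n + lam * np.eye(d)
A_inv = np.linalg.inv(A)

def minibatch_grad(w, batch_size=32):
    """Unbiased mini-batch estimate of the regularized logistic-loss gradient."""
    idx = rng.choice(n, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    margins = yb * (Xb @ w)
    coef = -yb / (1.0 + np.exp(margins))  # derivative of log(1 + exp(-margin))
    return Xb.T @ coef / batch_size + lam * w

w = np.zeros(d)
for k in range(1, 501):
    gamma_k = 1.0 / np.sqrt(k)  # decreasing stepsize damps stochastic errors
    # MM-type update: minimize the majorant along the stochastic gradient,
    # i.e., a preconditioned stochastic gradient step with matrix A^{-1}.
    w -= gamma_k * (A_inv @ minibatch_grad(w))

print(f"training accuracy: {np.mean(np.sign(X @ w) == y):.3f}")
```

In this sketch the majorant matrix A plays the role of a fixed preconditioner; MM subspace methods instead restrict each update to a low-dimensional subspace (e.g., spanned by the current gradient and past directions), which is what the paper's analysis covers under inexact gradients.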