Conference paper, Year: 2023

Riemannian stochastic optimization methods avoid strict saddle points

Abstract

Many modern machine learning applications - from online principal component analysis to covariance matrix identification and dictionary learning - can be formulated as minimization problems on Riemannian manifolds, and are typically solved with a Riemannian stochastic gradient method (or some variant thereof). However, in many cases of interest, the resulting minimization problem is not geodesically convex, so the convergence of the chosen solver to a desirable solution - i.e., a local minimizer - is by no means guaranteed. In this paper, we study precisely this question, that is, whether stochastic Riemannian optimization algorithms are guaranteed to avoid saddle points with probability 1. For generality, we study a family of retraction-based methods which, in addition to having a potentially much lower per-iteration cost relative to Riemannian gradient descent, include other widely used algorithms, such as natural policy gradient methods and mirror descent in ordinary convex spaces. In this general setting, we show that, under mild assumptions for the ambient manifold and the oracle providing gradient information, the policies under study avoid strict saddle points / submanifolds with probability 1, from any initial condition. This result provides an important sanity check for the use of gradient methods on manifolds as it shows that, almost always, the limit state of a stochastic Riemannian algorithm can only be a local minimizer.
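To make the algorithmic setting concrete, the following is a minimal, self-contained sketch (not taken from the paper) of a retraction-based Riemannian stochastic gradient method, instantiated for online PCA on the unit sphere; the toy objective, the normalization retraction, the step-size schedule, and all variable names are illustrative assumptions.

```python
import numpy as np

def sphere_retraction(x, v):
    # Metric-projection retraction on the unit sphere S^{n-1}:
    # take the ambient step x + v, then renormalize back onto the manifold.
    y = x + v
    return y / np.linalg.norm(y)

def tangent_projection(x, g):
    # Project an ambient (Euclidean) gradient onto the tangent space
    # T_x S^{n-1} = { v : <v, x> = 0 } to get the Riemannian gradient.
    return g - np.dot(g, x) * x

rng = np.random.default_rng(0)
n, T = 10, 20000
A = rng.standard_normal((n, n))
C = A @ A.T / n                       # population covariance (toy data model)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                # arbitrary initial point on the sphere

for t in range(1, T + 1):
    z = rng.multivariate_normal(np.zeros(n), C)  # fresh stochastic sample
    # Online PCA: minimize f(x) = -E[<z, x>^2] over the unit sphere; with
    # distinct eigenvalues, the non-leading eigenvectors of C are strict saddles.
    euclid_grad = -2.0 * np.dot(z, x) * z        # unbiased gradient estimate
    g = tangent_projection(x, euclid_grad)
    step = 1.0 / t ** 0.75                       # Robbins-Monro step size
    x = sphere_retraction(x, -step * g)

# The iterate should align (up to sign) with the leading eigenvector of C,
# illustrating avoidance of the strict saddles.
v1 = np.linalg.eigh(C)[1][:, -1]
print("alignment |<x, v1>|:", abs(np.dot(x, v1)))
```

The normalization map used here is a genuine retraction that is much cheaper to evaluate than the exact exponential map on the sphere, which is the kind of per-iteration saving the abstract attributes to retraction-based methods relative to full Riemannian gradient descent.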
Main file: Main.pdf (5.06 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04307876, version 1 (26-11-2023)

License

CC BY - Attribution

Identifiers

HAL Id: hal-04307876

Cite

Ya-Ping Hsieh, Mohammad Reza Karimi, Andreas Krause, Panayotis Mertikopoulos. Riemannian stochastic optimization methods avoid strict saddle points. NeurIPS 2023 - 37th Conference on Neural Information Processing Systems, Dec 2023, New Orleans (LA), United States. pp.1-27. ⟨hal-04307876⟩