Conference paper, Year: 2023

A new non-convex framework to improve asymptotical knowledge on generic stochastic gradient descent

Abstract

Stochastic gradient optimization methods are broadly used to minimize non-convex smooth objective functions, for instance when training deep neural networks. However, theoretical guarantees on the asymptotic behavior of these methods remain scarce. In particular, ensuring almost-sure convergence of the iterates to a stationary point is quite challenging. In this work, we introduce a new Kurdyka-Łojasiewicz theoretical framework to analyze the asymptotic behavior of stochastic gradient descent (SGD) schemes when minimizing non-convex smooth objectives. Our framework provides new almost-sure convergence results for iterates generated by any SGD method satisfying mild conditional descent conditions. We illustrate the proposed framework on several toy simulation examples, highlighting the role of the considered theoretical assumptions and investigating how the SGD iterates behave when these assumptions are fully or only partially satisfied.
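To give a concrete picture of the setting the abstract describes, the following minimal Python sketch runs a generic SGD scheme on a smooth non-convex toy objective. The double-well function, the Robbins-Monro stepsize schedule, and the Gaussian gradient-noise model are illustrative assumptions chosen here; they are not taken from the paper, whose conditional descent conditions are more general.

```python
import numpy as np

# Toy sketch (not the authors' exact scheme): SGD on the smooth,
# non-convex double-well objective f(x) = x**4/4 - x**2/2, whose
# stationary points are x = -1, 0, 1. This function is real-analytic,
# a classical sufficient condition for the Kurdyka-Lojasiewicz property.

def grad_f(x):
    # Exact gradient of f(x) = x**4/4 - x**2/2
    return x**3 - x

rng = np.random.default_rng(0)
x = 2.0                    # initial iterate
gamma0, alpha = 0.1, 0.75  # assumed Robbins-Monro stepsizes:
                           # sum gamma_k diverges, sum gamma_k**2 converges
sigma = 0.3                # assumed zero-mean, bounded-variance gradient noise

for k in range(20000):
    gamma_k = gamma0 / (k + 1) ** alpha
    stochastic_grad = grad_f(x) + sigma * rng.standard_normal()
    x = x - gamma_k * stochastic_grad

print(f"final iterate: {x:.4f}")  # typically close to a stationary point (+/- 1)
```

With decaying stepsizes and bounded noise variance as assumed above, the iterates settle near a stationary point of f; rerunning with a constant stepsize or heavier noise is a simple way to observe what happens when such assumptions are only partially met.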
Main file: MLSP_2023.pdf (327.71 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04165342, version 1 (18-07-2023)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-04165342, version 1

Cite

Jean-Baptiste Fest, Audrey Repetti, Emilie Chouzenoux. A new non-convex framework to improve asymptotical knowledge on generic stochastic gradient descent. MLSP 2023 - IEEE International Workshop on Machine Learning for Signal Processing, Sep 2023, Rome, Italy. ⟨hal-04165342⟩
