Journal article, SIAM Journal on Optimization, Year: 2020

On the convergence of mirror descent beyond stochastic convex programming

Abstract

In this paper, we examine a class of nonconvex stochastic optimization problems which we call variationally coherent, and which properly includes all quasi-convex programs. In view of solving such problems, we focus on the widely used stochastic mirror descent (SMD) family of algorithms, and we establish that the method's last iterate converges with probability 1. We further introduce a localized version of variational coherence which ensures local convergence of SMD with high probability. These results contribute to the landscape of nonconvex stochastic optimization by showing that quasi-convexity is not essential for convergence: rather, variational coherence, a much weaker requirement, suffices. Finally, building on the above, we reveal an interesting insight regarding the convergence speed of SMD: in variationally coherent problems with sharp minima (e.g., generic linear programs), the last iterate of SMD reaches an exact global optimum in a finite number of steps (a.s.), even in the presence of persistent noise. This result is to be contrasted with existing work on black-box stochastic linear programs which only exhibit asymptotic convergence rates.
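To make the setting concrete, the sketch below shows a minimal stochastic mirror descent loop with the entropic mirror map on the probability simplex (the exponentiated-gradient update). It is an illustrative example only, not code from the paper: the test objective, noise level, step-size schedule, and all function names (`smd_entropic`, `grad_oracle`, `noisy_grad`, `p`) are assumptions made for this sketch.

```python
import numpy as np

def smd_entropic(grad_oracle, x0, steps=5000, lr=0.05, rng=None):
    """Minimal sketch of stochastic mirror descent on the probability simplex,
    using the entropic mirror map (exponentiated-gradient update).
    `grad_oracle(x, rng)` returns a noisy gradient estimate at x."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for t in range(1, steps + 1):
        g = grad_oracle(x, rng)        # stochastic gradient sample
        step = lr / np.sqrt(t)         # decaying step size (a common, assumed schedule)
        x = x * np.exp(-step * g)      # mirror step under the entropic mirror map
        x /= x.sum()                   # renormalize: Bregman projection onto the simplex
    return x

# Hypothetical test problem: minimize f(x) = ||x - p||^2 over the simplex,
# observed through zero-mean Gaussian gradient noise (persistent noise).
p = np.array([0.6, 0.3, 0.1])
noisy_grad = lambda x, rng: 2.0 * (x - p) + rng.normal(scale=0.5, size=x.shape)

x_last = smd_entropic(noisy_grad, x0=np.ones(3) / 3)
print(np.round(x_last, 3))  # last iterate, close to p despite the noise
```

In this toy run it is the last iterate `x_last` (not an averaged iterate) that is inspected, echoing the paper's focus on last-iterate convergence of SMD under persistent noise.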
Main file: NonConvexMD.pdf (6.21 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01643346, version 1 (07-12-2020)

Identifiers

Cite

Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Stephen Boyd, Peter W. Glynn. On the convergence of mirror descent beyond stochastic convex programming. SIAM Journal on Optimization, 2020, 30 (1), pp.687-716. ⟨10.1137/17M1134925⟩. ⟨hal-01643346⟩
277 views, 93 downloads
