Preprint / Working paper. Year: 2020

Sparse Optimization on Measures with Over-parameterized Gradient Descent

Abstract

Minimizing a convex function of a measure with a sparsity-inducing penalty is a typical problem arising, e.g., in sparse spikes deconvolution or in the training of two-layer neural networks. We show that this problem can be solved by discretizing the measure and running non-convex gradient descent on the positions and weights of the particles. For measures on a $d$-dimensional manifold and under some non-degeneracy assumptions, this leads to a global optimization algorithm with a complexity scaling as $\log(1/\epsilon)$ in the desired accuracy $\epsilon$, instead of $\epsilon^{-d}$ for convex methods. The key theoretical tools are a local convergence analysis in Wasserstein space and an analysis of a perturbed mirror descent in the space of measures. Our bounds involve quantities that are exponential in $d$, which is unavoidable under our assumptions.
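To make the approach concrete, here is a minimal sketch (Python/NumPy, not the author's code) of over-parameterized gradient descent for 1-D sparse spikes deconvolution: the measure is discretized as a weighted sum of particles, and plain gradient descent is run jointly on the weights and positions. The Gaussian kernel, penalty strength, step sizes, and particle count are all illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
sigma = 0.05  # width of the measurement kernel (illustrative)

def features(x, s):
    # Gaussian measurement kernel phi(x) sampled at grid points s -> (n, m)
    return np.exp(-0.5 * ((x[:, None] - s[None, :]) / sigma) ** 2)

s = np.linspace(0.0, 1.0, 100)          # measurement sampling grid
x_true = np.array([0.25, 0.5, 0.8])     # ground-truth spike positions
w_true = np.array([1.0, -0.7, 0.5])     # ground-truth signed weights
y = features(x_true, s).T @ w_true      # noiseless observations

lam = 1e-3    # sparsity (total variation) penalty, illustrative value
n = 50        # many more particles than spikes: over-parameterization
x = rng.uniform(0.0, 1.0, n)            # particle positions
w = np.zeros(n)                         # particle weights
lr_w, lr_x = 0.05, 1e-3                 # step sizes, chosen by hand

for _ in range(10000):
    K = features(x, s)                  # (n, m) features phi(x_i)
    r = K.T @ w - y                     # residual of the current fit
    # F(w, x) = 0.5 * || sum_i w_i phi(x_i) - y ||^2 + lam * sum_i |w_i|
    grad_w = K @ r + lam * np.sign(w)
    dK = -((x[:, None] - s[None, :]) / sigma**2) * K  # d phi(x_i) / dx_i
    grad_x = w * (dK @ r)
    w -= lr_w * grad_w
    x -= lr_x * grad_x

mask = np.abs(w) > 1e-2                 # particles with significant weight
print("recovered positions:", np.round(np.sort(x[mask]), 3))
print("true positions:     ", x_true)

Under the paper's non-degeneracy assumptions, the significant particles should cluster near the support of the optimal sparse measure; this sketch only illustrates the particle parameterization and the joint gradient updates, not the paper's convergence guarantees.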
Main file: sparseoptmeasure.pdf (1.11 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02190822, version 1 (23-07-2019)
hal-02190822, version 2 (02-11-2020)

Identifiers

HAL Id: hal-02190822

Cite

Lenaic Chizat. Sparse Optimization on Measures with Over-parameterized Gradient Descent. 2020. ⟨hal-02190822v2⟩
812 views
646 downloads
