Conference Paper, Year: 2012

On the Equivalence between Herding and Conditional Gradient Algorithms

Abstract

We show that the herding procedure of Welling (2009) takes exactly the form of a standard convex optimization algorithm, namely a conditional gradient algorithm minimizing a quadratic moment discrepancy. This link enables us to invoke convergence results from convex optimization and to consider faster alternatives for the task of approximating integrals in a reproducing kernel Hilbert space. We study the behavior of the different variants through numerical simulations. The experiments indicate that while we can improve over herding on the task of approximating integrals, the original herding algorithm tends to approach the maximum-entropy distribution more often, shedding more light on the learning bias behind herding.
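To make the stated equivalence concrete, the following is a minimal sketch (not taken from the paper) on a finite candidate set with an explicit feature map: herding, with its weight vector initialized to the target moments minus a starting vertex, selects exactly the same points as a conditional gradient (Frank-Wolfe) iteration with step size 1/(t+2) applied to J(g) = ½‖g − μ‖² over the convex hull of the feature vectors. The names X, Phi, phi, T, and the quadratic feature map are illustrative choices, not quantities from the paper.

```python
import numpy as np

# Illustrative setup: finite candidate set and an explicit feature map.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # candidate points x
Phi = np.hstack([X, X**2])                # feature map phi(x) = (x, x^2)
mu = Phi.mean(axis=0)                     # target moments mu = E[phi(x)]

T = 50
g0 = Phi[0]                               # arbitrary starting vertex phi(x_0)

# Herding: x_{t+1} = argmax_x <w_t, phi(x)>,  w_{t+1} = w_t + mu - phi(x_{t+1})
w = mu - g0
herding_idx = []
for t in range(T):
    i = int(np.argmax(Phi @ w))
    herding_idx.append(i)
    w = w + mu - Phi[i]

# Conditional gradient (Frank-Wolfe) on J(g) = 0.5 * ||g - mu||^2 over
# conv{phi(x)}, with step size 1/(t+2), started at the same vertex g0.
g = g0.copy()
fw_idx = []
for t in range(T):
    i = int(np.argmin(Phi @ (g - mu)))    # linear minimization oracle
    fw_idx.append(i)
    g = g + (Phi[i] - g) / (t + 2)        # convex-combination update

print(herding_idx == fw_idx)                               # True, up to ties
print(np.linalg.norm(Phi[herding_idx].mean(axis=0) - mu))  # moment discrepancy
```

The two index sequences coincide (up to tie-breaking) because the herding weights satisfy w_t = (t+1)(μ − g_t), so both algorithms maximize the same linear function at every step; the printed norm is the moment discrepancy whose decay the convex-optimization convergence results bound.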
Main file: icml2012_herding_final.pdf (381.67 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-00681128, version 1 (20-03-2012)
hal-00681128, version 2 (11-09-2012)

Identifiers

HAL Id: hal-00681128
Cite

Francis Bach, Simon Lacoste-Julien, Guillaume Obozinski. On the Equivalence between Herding and Conditional Gradient Algorithms. ICML 2012 - International Conference on Machine Learning, Jun 2012, Edinburgh, United Kingdom. ⟨hal-00681128v2⟩
380 views
769 downloads
