Efficient improper learning for online logistic regression

Abstract

We consider the setting of online logistic regression and study the regret with respect to the ℓ2-ball of radius B. It is known (see [Hazan et al., 2014]) that any proper algorithm with logarithmic regret in the number of samples (denoted n) necessarily suffers an exponential multiplicative constant in B. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving logarithmic regret. Indeed, [Foster et al., 2018] showed that the lower bound does not apply to improper algorithms and proposed a strategy based on exponential weights, but with prohibitive computational complexity. Our new algorithm, based on regularized empirical risk minimization with surrogate losses, achieves a regret scaling as O(B log(Bn)) with a per-round time complexity of order O(d^2), where d is the dimension.
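To make the general recipe concrete, below is a minimal, hypothetical sketch of the naive baseline the abstract builds on: at each round, predict with the minimizer of the regularized empirical logistic risk over all past observations (follow-the-regularized-leader). All class and variable names are illustrative, not from the paper. Note two deliberate gaps versus the paper's method: this baseline is a proper learner (it predicts with a logistic model from the comparator class), and re-solving the full ERM by Newton's method costs far more than O(d^2) per round; the paper's surrogate-loss construction is precisely what removes both limitations.

```python
import numpy as np

class OnlineRegularizedERM:
    """Follow-the-regularized-leader on the logistic loss (illustrative).

    At round t it predicts with the minimizer of
        sum_{s<t} log(1 + exp(-y_s * <w, x_s>)) + (lam / 2) * ||w||^2,
    recomputed from scratch with a few Newton steps. This is a *proper*
    baseline; the paper's improper predictions and O(d^2) per-round
    update come from its surrogate losses, which are not shown here.
    """

    def __init__(self, d, lam=1.0):
        self.d, self.lam = d, lam
        self.X, self.y = [], []       # past features / labels in {-1, +1}
        self.w = np.zeros(d)

    def predict_proba(self, x):
        """Predicted probability that the label of x is +1."""
        return 1.0 / (1.0 + np.exp(-self.w @ x))

    def update(self, x, label):
        """Observe (x, label) and re-fit the regularized empirical risk."""
        self.X.append(x)
        self.y.append(label)
        X, y = np.array(self.X), np.array(self.y)
        for _ in range(10):           # a few Newton steps suffice here
            m = np.clip(y * (X @ self.w), -30, 30)   # margins, clipped for stability
            p = 1.0 / (1.0 + np.exp(m))              # sigmoid(-margin)
            grad = -(X.T @ (y * p)) + self.lam * self.w
            H = (X.T * (p * (1 - p))) @ X + self.lam * np.eye(self.d)
            self.w -= np.linalg.solve(H, grad)

# Usage on synthetic data (labels drawn from a logistic model):
rng = np.random.default_rng(0)
learner, w_star = OnlineRegularizedERM(d=5), rng.normal(size=5)
for t in range(200):
    x = rng.normal(size=5)
    y_t = 1 if rng.random() < 1.0 / (1.0 + np.exp(-w_star @ x)) else -1
    prob = learner.predict_proba(x)   # predict before seeing the label
    learner.update(x, y_t)
```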
Main file: OnlineLogistic.pdf (374.14 KB). Origin: files produced by the author(s).

Dates and versions

hal-02510505 , version 1 (17-03-2020)
hal-02510505 , version 2 (19-03-2020)
hal-02510505 , version 3 (02-11-2020)

Identifiers

HAL Id: hal-02510505

Cite

Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi. Efficient improper learning for online logistic regression. COLT 2020 - 33rd Annual Conference on Learning Theory, Jul 2020, Graz / Virtual, Austria. ⟨hal-02510505v3⟩