Adaptive and optimal online linear regression on $\ell^1$-balls
Abstract
We consider the problem of online linear regression on individual sequences. The goal in this paper is for the forecaster to output sequential predictions which are, after $T$ time rounds, almost as good as the ones output by the best linear predictor in a given $\ell^1$-ball in $\mathbb{R}^d$. We consider both the case where the dimension $d$ is small relative to the time horizon $T$ and the case where it is large. We first present regret bounds with optimal dependencies on $d$, $T$, and on the sizes $U$, $X$ and $Y$ of the $\ell^1$-ball, the input data and the observations. The minimax regret is shown to exhibit a regime transition around the point $d = \sqrt{T} U X / (2 Y)$. Furthermore, we present efficient algorithms that are adaptive, i.e., that do not require the knowledge of $U$, $X$, $Y$, and $T$, but still achieve nearly optimal regret bounds.
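For concreteness, the regret criterion implicit in the abstract can be spelled out. The display below is a hedged sketch of the standard formalization of online linear regression with square loss on individual sequences, assuming (as the stated sizes suggest) that $U$ is the $\ell^1$-radius of the comparison ball, that the inputs satisfy $\|x_t\|_\infty \le X$, and that the observations satisfy $|y_t| \le Y$:
\[
\mathrm{Reg}_T \;=\; \sum_{t=1}^{T} \bigl(y_t - \widehat{y}_t\bigr)^2 \;-\; \min_{\|u\|_1 \le U} \, \sum_{t=1}^{T} \bigl(y_t - u^{\top} x_t\bigr)^2 ,
\]
where $\widehat{y}_t$ denotes the forecaster's prediction at round $t$ after observing $x_t \in \mathbb{R}^d$ (and all past data), and the minimum is taken over the best fixed linear predictor in the $\ell^1$-ball in hindsight.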