Rotting bandits are not harder than stochastic ones
Abstract
In stochastic multi-armed bandits, the reward distribution of each arm is
assumed to be stationary. This assumption is often violated in practice (e.g.,
in recommendation systems), where the reward of an arm may change whenever it is
selected, i.e., the rested bandit setting. In this paper, we consider the
non-parametric rotting bandit setting, where rewards can only decrease. We
introduce the filtering on expanding window average (FEWA) algorithm that
constructs moving averages of increasing windows to identify arms that are more
likely to return high rewards when pulled once more. We prove that for an
unknown horizon $T$, and without any knowledge of the decreasing behavior of
the $K$ arms, FEWA achieves a problem-dependent regret bound of
$\widetilde{\mathcal{O}}(\log{(KT)})$, and a problem-independent one of
$\widetilde{\mathcal{O}}(\sqrt{KT})$. Our result substantially improves over
the algorithm of Levine et al. (2017), which suffers regret
$\widetilde{\mathcal{O}}(K^{1/3}T^{2/3})$. FEWA also matches known bounds for
the stochastic bandit setting, thus showing that the rotting bandits are not
harder. Finally, we report simulations confirming the theoretical improvements
of FEWA.
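
As a rough illustration of the expanding-window filtering idea summarized above, the following Python sketch selects the next arm by averaging each arm's most recent rewards over progressively longer windows and discarding arms whose recent average is clearly suboptimal. The function name `fewa_select`, the confidence constants `delta` and `alpha`, and the window schedule are assumptions made for illustration; they are not the paper's exact specification.

```python
import numpy as np


def fewa_select(histories, delta=0.05, alpha=4.0):
    """Pick the next arm via expanding-window filtering (illustrative sketch).

    histories : list of 1-D numpy arrays with each arm's observed rewards.
    delta, alpha : confidence parameters (assumed values, not the paper's).
    """
    active = set(range(len(histories)))
    h = 1
    while True:
        # If some active arm has fewer than h samples, stop filtering and
        # pull the least-sampled active arm so it catches up at this window.
        if any(len(histories[i]) < h for i in active):
            return min(active, key=lambda i: len(histories[i]))
        # Average of the h most recent rewards of every active arm.
        means = {i: histories[i][-h:].mean() for i in active}
        # Confidence width for a window of h samples (sub-Gaussian rewards
        # assumed; the exact constant is an assumption of this sketch).
        width = np.sqrt(alpha * np.log(1.0 / delta) / h)
        best = max(means.values())
        # Discard arms whose recent average is clearly below the best one;
        # the survivors may still yield a high reward if pulled once more.
        active = {i for i in active if means[i] >= best - 2.0 * width}
        if len(active) == 1:
            return next(iter(active))
        h += 1  # expand the window and filter again
```

Starting from empty histories, this rule first pulls each arm once, then filters with longer and longer recent-reward averages, which is how it can track rewards that decay as arms are pulled.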
Domains
Machine Learning [stat.ML]
Origin: Files produced by the author(s)