Other publication, Year: 2006

Use of variance estimation in the multi-armed bandit problem

Abstract

An important aspect of most decision-making problems concerns the appropriate balance between exploitation (acting optimally according to the partial knowledge acquired so far) and exploration of the environment (acting sub-optimally in order to refine the current knowledge and improve future decisions). A typical example of this so-called exploration versus exploitation dilemma is the multi-armed bandit problem, for which many strategies have been developed. Here we investigate policies based on the choice of the arm having the highest upper-confidence bound, where the bound takes into account the empirical variance of the different arms. Such an algorithm was found earlier to outperform its peers in a series of numerical experiments. The main contribution of this paper is the theoretical investigation of this algorithm, and this contribution is twofold. First, we prove that with probability at least $1-\beta$, the regret after $n$ plays of a variant of the UCB algorithm (called $\beta$-UCB) is upper-bounded by a constant that scales linearly with $\log(1/\beta)$ but is independent of $n$. We also analyse a variant which is closer to the algorithm suggested earlier. We prove a logarithmic bound on the expected regret of this algorithm and argue that the bound scales favourably with the variance of the suboptimal arms.
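For readers unfamiliar with variance-aware index policies, the sketch below illustrates the general idea described in the abstract: each arm is scored by its empirical mean plus a confidence bonus that involves the empirical variance, and the arm with the highest index is played. This is a minimal UCB-V-style illustration in Python, not the paper's exact $\beta$-UCB construction; the index formula, the exploration constant `c`, and the Bernoulli test arms are assumptions made here for illustration only.

```python
import math
import random


def variance_ucb_index(mean, var, count, t, c=1.0):
    """Variance-aware upper-confidence index (illustrative, UCB-V style):
    empirical mean plus a variance-scaled term and a range term."""
    bonus = math.sqrt(2.0 * var * math.log(t) / count) + c * math.log(t) / count
    return mean + bonus


def play(arms, horizon):
    """Pull each arm once, then always pull the arm with the highest index."""
    k = len(arms)
    counts = [0] * k
    sums = [0.0] * k
    sq_sums = [0.0] * k

    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1                      # initial round-robin over the arms
        else:
            indices = []
            for i in range(k):
                mean = sums[i] / counts[i]
                var = max(sq_sums[i] / counts[i] - mean ** 2, 0.0)
                indices.append(variance_ucb_index(mean, var, counts[i], t))
            a = max(range(k), key=lambda i: indices[i])

        r = arms[a]()                      # observe a reward in [0, 1]
        counts[a] += 1
        sums[a] += r
        sq_sums[a] += r * r

    return counts


if __name__ == "__main__":
    # Two hypothetical Bernoulli arms; the better arm should be pulled far more often.
    arms = [lambda: float(random.random() < 0.6),
            lambda: float(random.random() < 0.4)]
    print(play(arms, 5000))
```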
Main file
ucbtuned.pdf (124.66 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00203496 , version 1 (10-01-2008)

Identifiers

  • HAL Id : inria-00203496 , version 1

Cite

Jean-Yves Audibert, Rémi Munos, Csaba Szepesvari. Use of variance estimation in the multi-armed bandit problem. 2006. ⟨inria-00203496⟩
326 Views
310 Downloads
