Conference Paper, Year: 2016

Improved Learning Complexity in Combinatorial Pure Exploration Bandits

Abstract

We study the problem of combinatorial pure exploration in the stochastic multi-armed bandit problem. We first construct a new measure of complexity that provably characterizes the learning performance of the algorithms we propose for the fixed-confidence and the fixed-budget settings. We show that this complexity is never higher than the one in existing work and illustrate a number of configurations in which it can be significantly smaller. While in general this improvement comes at the cost of increased computational complexity, we provide a series of examples, including a planning problem, where this extra cost is not significant.
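To make the problem setting concrete, below is a minimal illustrative sketch of combinatorial pure exploration in a stochastic bandit. It is not the authors' algorithm: it uses a naive uniform-allocation baseline in the fixed-budget setting, and the choice of a top-m decision class, the arm means, the budget, and all variable names are assumptions made purely for illustration.

```python
# Minimal sketch of combinatorial pure exploration (fixed-budget setting).
# NOT the paper's algorithm: uniform allocation is used as a simple baseline,
# and the "top-m arms" decision class is an assumed combinatorial structure.

import numpy as np

rng = np.random.default_rng(0)

n_arms = 6                                           # assumed number of arms
m = 3                                                # assumed structure: select the best m arms
true_means = rng.uniform(0.2, 0.8, size=n_arms)      # assumed Bernoulli arm means
budget = 600                                         # assumed total number of pulls

# Uniform allocation: pull every arm the same number of times.
pulls_per_arm = budget // n_arms
counts = np.full(n_arms, pulls_per_arm)
rewards = np.array([rng.binomial(pulls_per_arm, p) for p in true_means])
empirical_means = rewards / counts

# Recommend the combinatorial set: the m arms with the highest empirical mean.
recommended = set(np.argsort(empirical_means)[-m:])
optimal = set(np.argsort(true_means)[-m:])

print("recommended set:", sorted(recommended))
print("optimal set:    ", sorted(optimal))
print("correct identification:", recommended == optimal)
```

The probability that such a recommendation is wrong depends on the gaps between the arm means and on the combinatorial structure of the decision class; the paper's complexity measure refines how these quantities enter the sample-complexity and error-probability bounds.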
Main file: AISTATS_full_CR.pdf (1.11 MB). Origin: files produced by the author(s).

Dates and versions

hal-01322198, version 1 (26-05-2016)

Identifiers

  • HAL Id: hal-01322198, version 1

Cite

Victor Gabillon, Alessandro Lazaric, Mohammad Ghavamzadeh, Ronald Ortner, Peter Bartlett. Improved Learning Complexity in Combinatorial Pure Exploration Bandits. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2016, Cadiz, Spain. ⟨hal-01322198⟩