Sequential Transfer in Multi-armed Bandit with Finite Set of Models
Conference paper, 2013

Sequential Transfer in Multi-armed Bandit with Finite Set of Models

Abstract

Learning from prior tasks and transferring that experience to improve future performance is critical for building lifelong learning agents. Although results in supervised and reinforcement learning show that transfer may significantly improve learning performance, most of the literature on transfer focuses on batch learning tasks. In this paper we study the problem of sequential transfer in online learning, notably in the multi-armed bandit framework, where the objective is to minimize the total regret over a sequence of tasks by transferring knowledge from prior tasks. Under the assumption that the tasks are drawn from a stationary distribution over a finite set of models, we introduce a novel bandit algorithm based on a method-of-moments approach for estimating the possible tasks and derive regret bounds for it. Finally, we report preliminary empirical results confirming the theoretical findings.
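To make the transfer idea concrete, below is a minimal sketch of a UCB-style strategy that exploits a known finite set of candidate models (vectors of arm means), in the spirit of the setting described in the abstract. The function name, the Hoeffding-style confidence radius, and the assumption of rewards in [0, 1] are illustrative choices, not the authors' exact algorithm, which also covers estimating the model set itself across tasks via a method of moments.

```python
import numpy as np

def finite_model_ucb(models, pull, n_steps, delta=0.05):
    """Illustrative UCB-style bandit over a known finite set of models.

    models : (m, K) array, each row a candidate vector of arm means
             (assumed already known, e.g. estimated from prior tasks).
    pull   : callable arm -> reward in [0, 1] for the current task.
    """
    models = np.asarray(models, dtype=float)
    m, K = models.shape
    counts = np.zeros(K)           # number of pulls per arm
    sums = np.zeros(K)             # sum of observed rewards per arm
    compatible = np.ones(m, bool)  # candidate models not yet ruled out
    rewards = []

    for t in range(1, n_steps + 1):
        # Hoeffding-style confidence radius for each arm (rewards in [0, 1]).
        eps = np.sqrt(np.log(2 * K * t**2 / delta) / (2 * np.maximum(counts, 1)))
        mu_hat = sums / np.maximum(counts, 1)

        # Discard models whose means fall outside the confidence intervals
        # of arms that have been pulled at least once.
        pulled = counts > 0
        if pulled.any():
            dev = np.abs(models[:, pulled] - mu_hat[pulled]) > eps[pulled]
            compatible &= ~dev.any(axis=1)
        if not compatible.any():
            compatible[:] = True  # fallback: reset if noise eliminated everything

        # Optimistic choice: play the optimal arm of the compatible model
        # with the highest optimal value.
        best_model = np.argmax(np.where(compatible, models.max(axis=1), -np.inf))
        arm = int(np.argmax(models[best_model]))

        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        rewards.append(r)

    return np.array(rewards)
```

For instance, with models = np.array([[0.9, 0.1], [0.2, 0.8]]) and pull drawing Bernoulli rewards according to one of these rows, a few pulls suffice to rule out the incompatible model, after which the strategy commits to the current task's optimal arm; this is the mechanism by which knowledge of the finite model set reduces regret relative to a model-free bandit algorithm.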
Main file: transfer-bandit.pdf (276.83 KB). Origin: files produced by the author(s).

Dates and versions

hal-00924025, version 1 (06-01-2014)

Identifiers

  • HAL Id: hal-00924025, version 1

Cite

Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill. Sequential Transfer in Multi-armed Bandit with Finite Set of Models. NIPS 2013 - Advances in Neural Information Processing Systems 26, Dec 2013, Lake Tahoe, United States. ⟨hal-00924025⟩