Journal article, Transactions on Machine Learning Research, 2022

Learning algorithms for Markovian Bandits: Is Posterior Sampling more Scalable than Optimism?

Abstract

In this paper, we study the scalability of model-based algorithms that learn the optimal policy of a discounted rested Markovian bandit problem with $n$ arms. There are two categories of model-based reinforcement learning algorithms: Bayesian algorithms (like PSRL) and optimistic algorithms (like UCRL2 or UCBVI). A naive application of these algorithms is not scalable because the state space is exponential in $n$. We construct variants of these algorithms specially tailored to Markovian bandits (MB), which we call MB-PSRL, MB-UCRL2, and MB-UCBVI. We consider an episodic setting with geometrically distributed episode lengths and measure an algorithm's performance in terms of regret (Bayesian regret for MB-PSRL and expected regret for MB-UCRL2 and MB-UCBVI). We prove that, in this setting, all three algorithms have a low regret in $\tilde{O}(S\sqrt{nK})$, where $K$ is the number of episodes, $n$ is the number of arms, and $S$ is the number of states of each arm. Up to a factor $\sqrt{S}$, these regrets match the Bayesian minimax regret lower bound of $\Omega(\sqrt{SnK})$ that we also derive. Even though their theoretical regrets are comparable, the time complexities of these algorithms differ greatly: we show that MB-UCRL2, and more generally any algorithm that uses bonuses on transition matrices, has a time complexity that grows exponentially in $n$. In contrast, MB-UCBVI does not use bonuses on transition matrices, and we show that it can be implemented efficiently, with a time complexity linear in $n$. Our numerical experiments show, however, that its empirical regret is large. Our Bayesian algorithm, MB-PSRL, enjoys the best of both worlds: its running time is linear in the number of arms, and its empirical regret is the smallest of all the algorithms. This is a new addition to our understanding of the power of Bayesian algorithms, which can often be tailored to the structure of the problem to be learned.
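To illustrate why the Bayesian approach scales with the number of arms, here is a minimal sketch of a posterior-sampling learner in the spirit of MB-PSRL. It is not the paper's exact algorithm: the Dirichlet/Beta conjugate priors, the episode-length parameter $1-\beta$, and the function names (`gittins_indices`, `mb_psrl`) are illustrative assumptions of this sketch. The structural point is that the sampled model factorizes over arms, and the optimal policy of a rested Markovian bandit with known parameters is an index (Gittins) policy, so each episode only requires solving $n$ small per-arm problems instead of one MDP whose state space is exponential in $n$.

```python
# Sketch of a posterior-sampling learner for rested Markovian bandits, in the
# spirit of MB-PSRL.  Priors, episode-length law, and names are illustrative
# assumptions, not the paper's exact construction.
import numpy as np

rng = np.random.default_rng(0)


def gittins_indices(P, r, beta):
    """Gittins indices of one arm (transition matrix P, mean rewards r, discount
    beta), via the largest-remaining-index (Varaiya-Walrand-Buyukkoc) algorithm:
    states are peeled off in order of decreasing index."""
    S = len(r)
    idx = np.empty(S)
    order = []                                    # states already ranked
    remaining = list(range(S))
    while remaining:
        C = np.array(order, dtype=int)            # current continuation set
        if len(C):
            A = np.eye(len(C)) - beta * P[np.ix_(C, C)]
            d_C = np.linalg.solve(A, r[C])             # discounted reward while in C
            b_C = np.linalg.solve(A, np.ones(len(C)))  # discounted time while in C
        else:
            d_C, b_C = np.zeros(0), np.zeros(0)
        best, best_val = None, -np.inf
        for s in remaining:
            d = r[s] + beta * P[s, C] @ d_C       # reward accrued until leaving C
            b = 1.0 + beta * P[s, C] @ b_C        # time spent until leaving C
            if d / b > best_val:
                best, best_val = s, d / b
        idx[best] = best_val
        order.append(best)
        remaining.remove(best)
    return idx


def mb_psrl(arms, n_states, beta, n_episodes):
    """Posterior-sampling sketch for a rested Markovian bandit.
    `arms[i](state)` plays arm i in `state` and returns (reward, next_state)."""
    n = len(arms)
    # Conjugate priors (an assumption of this sketch): Dirichlet(1,...,1) on
    # every transition row, Beta(1, 1) on every Bernoulli reward mean.
    trans_counts = np.ones((n, n_states, n_states))
    rew_a = np.ones((n, n_states))
    rew_b = np.ones((n, n_states))
    states = np.zeros(n, dtype=int)               # current state of each arm
    total_reward = 0.0
    for _ in range(n_episodes):
        indices = np.empty((n, n_states))
        for i in range(n):                        # O(n): one small model per arm
            P = np.array([rng.dirichlet(trans_counts[i, s]) for s in range(n_states)])
            r = rng.beta(rew_a[i], rew_b[i])
            indices[i] = gittins_indices(P, r, beta)   # arm solved in isolation
        # Episode of geometrically distributed length (mean 1 / (1 - beta)).
        for _ in range(rng.geometric(1.0 - beta)):
            i = int(np.argmax(indices[np.arange(n), states]))  # highest index wins
            reward, next_state = arms[i](states[i])
            trans_counts[i, states[i], next_state] += 1        # posterior update
            rew_a[i, states[i]] += reward
            rew_b[i, states[i]] += 1 - reward
            total_reward += reward
            states[i] = next_state
    return total_reward


# Toy usage: three two-state arms with dynamics hidden from the learner.
def make_arm(P, means):
    def step(s):
        return float(rng.random() < means[s]), int(rng.choice(len(means), p=P[s]))
    return step

true_P = np.array([[0.9, 0.1], [0.2, 0.8]])
arms = [make_arm(true_P, np.array([0.2, 0.8]) * (k + 1) / 3) for k in range(3)]
print(mb_psrl(arms, n_states=2, beta=0.9, n_episodes=50))
```

The per-episode cost here is $n$ independent index computations on $S$-state chains. Adding optimistic bonuses to the transition matrices would break this per-arm factorization, which is consistent with the exponential time complexity of MB-UCRL2 stated in the abstract.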

Dates and versions

hal-03262006 , version 1 (16-06-2021)
hal-03262006 , version 2 (02-05-2022)
hal-03262006 , version 3 (09-02-2023)

Cite

Nicolas Gast, Bruno Gaujal, Kimang Khun. Learning algorithms for Markovian Bandits: Is Posterior Sampling more Scalable than Optimism? Transactions on Machine Learning Research, 2022. ⟨hal-03262006v3⟩