Conference paper, Year: 2009

Boosting Active Learning to Optimality: a Tractable Monte-Carlo, Billiard-based Algorithm

Abstract

This paper focuses on Active Learning with a limited number of queries; in application domains such as Numerical Engineering, the size of the training set might be limited to a few dozen or hundred examples due to computational constraints. Active Learning under bounded resources is formalized as a finite-horizon Reinforcement Learning problem, where the sampling strategy aims at minimizing the expectation of the generalization error. A tractable approximation of the optimal (intractable) policy is presented: the Bandit-based Active Learner (BAAL) algorithm. Viewing Active Learning as a single-player game, BAAL combines UCT, the tree-structured multi-armed bandit algorithm proposed by Kocsis and Szepesvári (2006), and billiard algorithms. A proof of principle of the approach demonstrates its good empirical convergence toward an optimal policy and its ability to incorporate prior AL criteria. Its hybridization with the Query-by-Committee approach is found to improve on both stand-alone BAAL and stand-alone QbC.
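To make the search concrete, here is a minimal, illustrative sketch of a UCT-style lookahead for choosing the next query, in the spirit of BAAL but not the authors' implementation: the setting is a toy one-dimensional threshold-learning problem, and a rejection sampler stands in for the paper's billiard sampler of the version space. All names and constants here (POOL, TRUE_THRESHOLD, UCB_C, the budget and simulation counts) are hypothetical choices made for this sketch.

```python
import math
import random

POOL = [i / 20 for i in range(21)]  # unlabeled pool of points in [0, 1]
TRUE_THRESHOLD = 0.37               # hidden target concept (hypothetical)
UCB_C = 1.4                         # UCB1 exploration constant

def oracle(x):
    """True labeling oracle (unknown to the learner)."""
    return x >= TRUE_THRESHOLD

def sample_hypothesis(labeled):
    """Rejection-sample a threshold classifier consistent with the labels
    gathered so far; a crude stand-in for the paper's billiard sampler
    of the version space."""
    while True:
        h = random.random()
        if all((x >= h) == y for x, y in labeled):
            return h

def error(h):
    """Generalization error of threshold h against the target."""
    return abs(h - TRUE_THRESHOLD)

def tree_walk(tree, labeled, budget):
    """One UCT simulation: pick queries by UCB1, draw their labels from a
    hypothesis sampled in the current version space, and reward the walk
    with minus the error of a final sampled hypothesis."""
    if budget == 0:
        return -error(sample_hypothesis(labeled))
    key = tuple(sorted(labeled))
    queried = {x for x, _ in labeled}
    stats = tree.setdefault(key, {x: [0, 0.0] for x in POOL if x not in queried})
    total = sum(n for n, _ in stats.values()) + 1

    def ucb(x):
        n, w = stats[x]
        if n == 0:
            return float("inf")
        return w / n + UCB_C * math.sqrt(math.log(total) / n)

    x = max(stats, key=ucb)
    y = x >= sample_hypothesis(labeled)   # simulated oracle answer
    reward = tree_walk(tree, labeled + [(x, y)], budget - 1)
    stats[x][0] += 1
    stats[x][1] += reward
    return reward

def next_query(labeled, budget=3, n_sims=2000):
    """Run n_sims tree-walks and return the most-visited root query."""
    tree = {}
    for _ in range(n_sims):
        tree_walk(tree, list(labeled), budget)
    root = tree[tuple(sorted(labeled))]
    return max(root, key=lambda q: root[q][0])

if __name__ == "__main__":
    labeled = [(0.0, False), (1.0, True)]  # two seed examples
    for _ in range(4):
        x = next_query(labeled)
        labeled.append((x, oracle(x)))
        print(f"queried x = {x:.2f}")
```

Each tree-walk plays out the remaining query budget against labels drawn from hypotheses sampled in the current version space, and its reward is minus the final generalization error, so the most-visited root query approximates the optimal first move of the finite-horizon problem.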
Main file: BALO.pdf (211.35 KB)
Origin: files produced by the author(s)

Dates and versions

inria-00433866, version 1 (20-11-2009)

Identifiers

  • HAL Id: inria-00433866, version 1

Cite

Philippe Rolet, Michèle Sebag, Olivier Teytaud. Boosting Active Learning to Optimality: a Tractable Monte-Carlo, Billiard-based Algorithm. ECML 2009, Bled, Slovenia. pp. 302-317. ⟨inria-00433866⟩
6349 views
1032 downloads
