Bandit-based Search for Constraint Programming
Abstract
Constraint Programming (CP) solvers classically explore the solution space using tree-search procedures guided by heuristics. Monte-Carlo Tree Search (MCTS) is a method for optimal sequential decision making under uncertainty. It estimates node values (with respect to some reward function) by Monte-Carlo trials and uses them to bias exploration towards the most promising regions of the tree, borrowing the multi-armed bandit decision rule. At the crossroads of CP and MCTS, this paper presents the Bandit Search for Constraint Programming (BASCOP) algorithm, adapting MCTS to the specifics of CP search trees. These adaptations concern i) the design of a generic reward function suited to CP problems; ii) the replacement of Monte-Carlo trials by iterations of a complete depth-first-search procedure; iii) the ability to take an existing value-ordering heuristic into account; iv) the aggregation of statistics in order to handle multiple restarts. BASCOP, using Gecode as the underlying constraint solver, shows significant improvements over the depth-first-search baseline on some CP benchmark suites, demonstrating its potential as a generic yet robust search method for CP.
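To illustrate the multi-armed bandit decision rule the abstract refers to, here is a minimal sketch of UCB1-style child selection, the rule commonly used by MCTS to trade off exploitation (mean observed reward) against exploration (an uncertainty bonus for rarely visited children). The function name, the data layout, and the exploration constant are illustrative assumptions, not details taken from the paper.

```python
import math

def ucb1_select(children, exploration=1.0):
    """Pick the child maximizing the UCB1 score: mean reward plus an
    exploration bonus that shrinks as a child accumulates visits.
    (Hypothetical helper; BASCOP's exact rule may differ.)"""
    total_visits = sum(c["visits"] for c in children)

    def score(c):
        if c["visits"] == 0:
            return float("inf")  # unvisited children are tried first
        mean_reward = c["reward_sum"] / c["visits"]
        bonus = exploration * math.sqrt(math.log(total_visits) / c["visits"])
        return mean_reward + bonus

    return max(children, key=score)

# Example: three candidate value assignments at a search node
children = [
    {"value": 0, "visits": 10, "reward_sum": 6.0},
    {"value": 1, "visits": 3,  "reward_sum": 2.5},
    {"value": 2, "visits": 0,  "reward_sum": 0.0},
]
print(ucb1_select(children)["value"])  # -> 2: the unvisited child is explored first
```

In an MCTS-for-CP setting, each "child" would correspond to a branching decision (e.g. a value assignment), and the reward would come from the outcome of the depth-first-search iterations the abstract describes in place of plain Monte-Carlo trials.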
Domains
Machine Learning [cs.LG]