Interactive Robot Education
Abstract
Aimed at on-board robot training, an approach hybridizing active preference learning and reinforcement learning is presented: Interactive Bayesian Policy Search (IBPS) builds a robotic controller through direct and frugal interaction with a human expert, who iteratively expresses preferences among a few behaviors demonstrated by the robot. These preferences allow the robot to gradually refine its estimate of the policy utility and to select a new policy to be demonstrated, according to an Expected Utility of Selection criterion. The contribution of the paper is the handling of preference noise, due to the expert's mistakes or disinterest when the demonstrated behaviors are equally unsatisfactory. A noise model is proposed, enabling a resource-limited robot to soundly estimate the preference noise and maintain a robust interaction with the expert, thus keeping the sample complexity low. A proof of principle of the IBPS approach, in simulation and on-board, is presented.
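To make the interaction loop described above concrete, the following is a minimal, self-contained sketch of a preference-based Bayesian policy search of this kind. It assumes a linear utility model over hand-crafted policy features, a particle approximation of the utility posterior, a Bradley-Terry preference likelihood mixed with a constant noise term, and an Expected Utility of Selection acquisition; the feature map, noise rate, and candidate-sampling scheme are illustrative placeholders, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): policies are 2-D parameter vectors,
# and the utility of a policy is assumed linear in hand-crafted features.
def features(theta):
    return np.array([theta[0], theta[1], theta[0] * theta[1]])

def true_utility(theta):                      # hidden expert utility, used only to simulate answers
    return features(theta) @ np.array([1.0, -0.5, 0.3])

# Sample-based (particle) posterior over the utility weights w, Gaussian prior.
W = rng.normal(size=(2000, 3))                # posterior particles
logp = np.zeros(len(W))                       # log-weights of the particles

def pref_loglik(w, phi_win, phi_lose, noise=0.1):
    """Bradley-Terry-style likelihood mixed with a constant preference-noise term."""
    p = 1.0 / (1.0 + np.exp(-(w @ (phi_win - phi_lose))))
    return np.log((1.0 - noise) * p + noise * 0.5)

def expected_utility_of_selection(phi_a, phi_b):
    """EUS of demonstrating the pair (a, b): the expert keeps the preferred one."""
    w = np.exp(logp - logp.max()); w /= w.sum()
    ua, ub = W @ phi_a, W @ phi_b
    return np.sum(w * np.maximum(ua, ub))

best = rng.uniform(-1, 1, size=2)             # current best policy parameters
for it in range(15):
    # Propose candidate policies and pick the challenger maximizing EUS.
    candidates = [rng.uniform(-1, 1, size=2) for _ in range(30)]
    challenger = max(
        candidates,
        key=lambda th: expected_utility_of_selection(features(th), features(best)),
    )
    # Simulated expert preference, with occasional mistakes (preference noise).
    expert_prefers_challenger = true_utility(challenger) > true_utility(best)
    if rng.random() < 0.1:
        expert_prefers_challenger = not expert_prefers_challenger
    win, lose = (challenger, best) if expert_prefers_challenger else (best, challenger)
    # Bayesian update of the utility posterior from the new preference.
    logp += np.array([pref_loglik(w, features(win), features(lose)) for w in W])
    best = win

print("estimated best policy parameters:", best)
```

In this toy loop the expert is simulated; on a real robot the preference would come from the human after watching the two demonstrated behaviors, and the noise rate would itself be estimated online rather than fixed.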
Domains

Artificial Intelligence [cs.AI]

Origin: Files produced by the author(s)