CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning
Abstract
In open-ended environments, autonomous learning agents must set their own goals and build their own curriculum through an intrinsically motivated exploration. They may consider a large diversity of goals, aiming to discover what is controllable in their environments, and what is not. Because some goals might prove easy and some impossible, agents must actively select which goal to practice at any moment, to maximize their overall mastery on the set of learnable goals. This paper proposes CURIOUS, an algorithm that leverages 1) a modular Universal Value Function Approximator with hindsight learning to achieve a diversity of goals of different kinds within a unique policy and 2) an automated curriculum learning mechanism that biases the attention of the agent towards goals maximizing the absolute learning progress. Agents focus sequentially on goals of increasing complexity, and focus back on goals that are being forgotten. Experiments conducted in a new modular-goal robotic environment show the resulting developmental self-organization of a learning curriculum, and demonstrate properties of robustness to distracting goals, forgetting and changes in body properties.
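As a rough illustration of the curriculum mechanism described above (not the authors' implementation), the sketch below biases goal-module selection towards modules with high absolute learning progress. The competence measure, window size, and epsilon mixing are assumptions chosen for the example.

```python
import numpy as np

def module_selection_probs(competence_history, window=100, epsilon=0.2):
    """Sketch of learning-progress-based module selection (assumed form).

    competence_history: one array per goal module, holding recent success
    outcomes (0/1) in chronological order. Returns a distribution over
    modules mixing uniform exploration with a bias towards modules whose
    absolute learning progress |recent competence - older competence| is high.
    """
    lp = []
    for history in competence_history:
        recent = np.mean(history[-window:]) if len(history) else 0.0
        older = np.mean(history[-2 * window:-window]) if len(history) > window else 0.0
        lp.append(abs(recent - older))
    lp = np.asarray(lp)
    n = len(lp)
    if lp.sum() == 0:
        # No measurable progress anywhere: fall back to uniform sampling.
        return np.full(n, 1.0 / n)
    return epsilon / n + (1 - epsilon) * lp / lp.sum()

# Example: module 1 is improving fast, so it is sampled most often;
# module 0 is already mastered and module 2 shows no progress yet.
probs = module_selection_probs([
    np.array([1] * 200),            # module 0: mastered, no progress
    np.array([0] * 100 + [1] * 100),# module 1: rapid recent improvement
    np.array([0] * 200),            # module 2: no progress (e.g. a distracting goal)
])
print(probs)  # roughly [0.07, 0.87, 0.07] with epsilon = 0.2
```

Because the bias follows absolute progress, a module whose competence drops (forgetting) regains sampling probability, matching the behavior described in the abstract.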