Trying Again Instead of Trying Longer: Prior Learning for Automatic Curriculum Learning

Conference paper, 2020

Abstract

A major challenge in the Deep RL (DRL) community is to train agents able to generalize over unseen situations, which is often approached by training them on a diversity of tasks (or environments). A powerful method to foster diversity is to procedurally generate tasks by sampling their parameters from a multidimensional distribution, making it possible, in particular, to propose a different task for each training episode. In practice, obtaining the high diversity of training tasks necessary for generalization requires complex procedural generation systems. With such generators, it is hard to obtain prior knowledge on which subset of tasks is actually learnable at all (many generated tasks may be unlearnable), on their relative difficulty, and on the most efficient ordering of task distributions for training. A typical solution in such cases is to rely on some form of Automated Curriculum Learning (ACL) to adapt the sampling distribution. One limit of current approaches is their need to explore the task space to detect progress niches over time, which wastes training time. Additionally, we hypothesize that the induced noise in the training data may impair the performance of brittle DRL learners. We address this problem by proposing a two-stage ACL approach where 1) a teacher algorithm first learns to train a DRL agent with a high-exploration curriculum, and then 2) distills learned priors from the first run to generate an "expert curriculum" to retrain the same agent from scratch. Besides demonstrating 50% improvements on average over the current state of the art, the objective of this work is to give a first example of a new research direction oriented towards refining ACL techniques over multiple learners, which we call Classroom Teaching.
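The two-stage procedure described in the abstract can be summarized with a short sketch. This is only a minimal illustration under assumed interfaces: the names (RandomExplorationTeacher, ReplayCurriculum, train_loop, dummy_episode), the parameter bounds, and the episode counts are hypothetical placeholders, not the authors' actual teacher algorithm or distillation scheme.

"""Minimal sketch of the two-stage ACL idea described in the abstract.
All names here (RandomExplorationTeacher, ReplayCurriculum, train_loop,
dummy_episode) are hypothetical placeholders, not the authors' implementation."""
import random


class RandomExplorationTeacher:
    """Stage-1 teacher: samples task parameters broadly to discover progress niches."""
    def __init__(self, param_bounds):
        self.param_bounds = param_bounds
        self.history = []  # (task_params, episode_return) pairs recorded during stage 1

    def sample_task(self):
        return [random.uniform(lo, hi) for lo, hi in self.param_bounds]

    def update(self, task_params, episode_return):
        self.history.append((task_params, episode_return))


class ReplayCurriculum:
    """Stage-2 'expert curriculum': replays the stage-1 task sequence in order.
    This stands in for distilling learned priors into a fixed curriculum."""
    def __init__(self, history):
        self.tasks = [task for task, _ in history]
        self.step = 0

    def sample_task(self):
        task = self.tasks[self.step % len(self.tasks)]
        self.step += 1
        return task

    def update(self, task_params, episode_return):
        pass  # fixed curriculum: no online adaptation


def train_loop(train_episode_fn, teacher, n_episodes):
    """Generic ACL loop: the teacher proposes a task, the learner trains on it."""
    for _ in range(n_episodes):
        task = teacher.sample_task()
        ret = train_episode_fn(task)  # one DRL training episode on this task
        teacher.update(task, ret)     # report the outcome back to the teacher


def dummy_episode(task_params):
    """Placeholder for an actual DRL training episode; returns a fake return."""
    return random.random()


# Stage 1: high-exploration run with a first agent.
explorer = RandomExplorationTeacher(param_bounds=[(0.0, 1.0), (0.0, 1.0)])
train_loop(dummy_episode, explorer, n_episodes=1000)

# Stage 2: build a fixed "expert curriculum" from the first run
# and retrain a fresh agent from scratch on it (trying again, not longer).
expert = ReplayCurriculum(explorer.history)
train_loop(dummy_episode, expert, n_episodes=1000)

In the actual method, stage 2 generates the expert curriculum from priors learned during stage 1 rather than replaying raw tasks; the replay above merely stands in for that distillation step.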
Main file
2004.03168.pdf (849.67 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03099913, version 1 (06-01-2021)

Identifiers

  • HAL Id: hal-03099913, version 1

Cite

Rémy Portelas, Katja Hofmann, Pierre-Yves Oudeyer. Trying Again Instead of Trying Longer: Prior Learning for Automatic Curriculum Learning. ICLR 2020 BeTR-RL (Beyond “Tabula Rasa” in Reinforcement Learning) Workshop, Apr 2020, Addis Ababa / Virtual, Ethiopia. ⟨hal-03099913⟩