Autotelic LLM-based exploration for goal-conditioned RL
Abstract
Designing autotelic agents capable of autonomously generating and pursuing their own goals is a promising direction for open-ended learning and skill acquisition in reinforcement learning. The challenge is especially difficult in open worlds, which require inventing new, previously unobserved goals. In this work, we propose an architecture in which a single generalist autotelic agent is trained on an automatic curriculum of goals. We leverage large language models (LLMs) to generate goals as code for reward functions, guided by learnability and difficulty estimates. The goal-conditioned RL agent is then trained on goals sampled according to learning progress. We compare our method to an adaptation of OMNI-EPIC to goal-conditioned RL. Our preliminary experiments indicate that our method generates a higher proportion of learnable goals, suggesting better adaptation to the goal-conditioned learner.
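The abstract mentions sampling goals for the learner according to learning progress. As a hedged illustration (not the authors' implementation), the sketch below estimates per-goal learning progress as the absolute change in success rate across a sliding window of outcomes and samples goals proportionally; the class and parameter names are hypothetical.

```python
import random
from collections import defaultdict, deque


class LearningProgressSampler:
    """Sample goals in proportion to absolute learning progress (illustrative sketch)."""

    def __init__(self, window: int = 50, epsilon: float = 0.1):
        self.window = window
        self.epsilon = epsilon  # probability of uniform sampling for exploration
        # Sliding window of success/failure outcomes per goal.
        self.outcomes = defaultdict(lambda: deque(maxlen=2 * window))

    def record(self, goal_id: str, success: bool) -> None:
        self.outcomes[goal_id].append(float(success))

    def _learning_progress(self, goal_id: str) -> float:
        history = list(self.outcomes[goal_id])
        if len(history) < 2:
            return 0.0
        mid = len(history) // 2
        older, recent = history[:mid], history[mid:]
        # Absolute change in success rate between the two halves of the window.
        return abs(sum(recent) / len(recent) - sum(older) / len(older))

    def sample(self, goal_ids: list[str]) -> str:
        if random.random() < self.epsilon:
            return random.choice(goal_ids)
        weights = [self._learning_progress(g) for g in goal_ids]
        if sum(weights) == 0.0:
            return random.choice(goal_ids)
        return random.choices(goal_ids, weights=weights, k=1)[0]
```

In such a scheme, goals that are either already mastered or still far too hard show little change in success rate and are sampled less often, focusing training on goals at the frontier of the agent's abilities.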
Domains
Computer Science [cs]

Origin: Files produced by the author(s)