Conference paper, 2019

MERL: Multi-Head Reinforcement Learning

Abstract

A common challenge in reinforcement learning is how to efficiently sample an environment to convert the agent's interactions into fast and robust learning, leading to high performance in complex tasks. For instance, earlier work makes use of domain/prior knowledge to improve existing reinforcement learning algorithms. While promising, previously acquired knowledge is often costly and challenging to scale up. Instead, we consider the use of problem knowledge, which constitutes signals from any relevant quantity useful for solving many tasks, e.g., self-performance assessment and accurate expectations. We propose MERL, a general framework for structuring reinforcement learning by injecting problem knowledge into policy gradient updates. Unlike other auxiliary-task methods, MERL is generally applicable to any task. As a result, policy and value functions are no longer optimized only for a reward but are learned using task-agnostic quantities. In this paper: (a) We introduce and define MERL, our new multi-head reinforcement learning framework. (b) We conduct experiments across a variety of standard benchmark environments, including 9 continuous control tasks, where results show improved performance. (c) We demonstrate that MERL also improves transfer learning on a set of challenging tasks. (d) We investigate how our approach tackles the problem of reward sparsity and better conditions the feature space in the context of deep reinforcement learning agents.
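The abstract describes a multi-head setup in which auxiliary heads predict task-agnostic quantities (e.g., a self-performance assessment) and their losses are combined with the policy gradient objective. Below is a minimal sketch of that idea in PyTorch; the shared torso, the single auxiliary head, its regression target, and the loss weights are illustrative assumptions, not the authors' implementation or hyperparameters.

```python
# Minimal sketch (not the authors' code) of a multi-head actor-critic.
# The auxiliary head, its target, and the loss weights are assumptions.
import torch
import torch.nn as nn


class MultiHeadActorCritic(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # Shared torso: all heads read the same features, so auxiliary
        # gradients also shape the representation used by the policy.
        self.torso = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state value V(s)
        self.aux_head = nn.Linear(hidden, 1)             # task-agnostic auxiliary prediction (assumption)

    def forward(self, obs: torch.Tensor):
        h = self.torso(obs)
        return self.policy_head(h), self.value_head(h).squeeze(-1), self.aux_head(h).squeeze(-1)


def multi_head_loss(logits, values, aux_pred, actions, advantages, returns, aux_target,
                    c_v: float = 0.5, c_aux: float = 0.1, c_ent: float = 0.01):
    """Policy-gradient loss plus an auxiliary regression term.
    The coefficients c_v, c_aux, c_ent are placeholders, not the paper's values."""
    dist = torch.distributions.Categorical(logits=logits)
    pg_loss = -(dist.log_prob(actions) * advantages).mean()   # REINFORCE-style term
    value_loss = (returns - values).pow(2).mean()              # critic regression
    aux_loss = (aux_target - aux_pred).pow(2).mean()           # task-agnostic head regression
    entropy = dist.entropy().mean()                            # exploration bonus
    return pg_loss + c_v * value_loss + c_aux * aux_loss - c_ent * entropy


if __name__ == "__main__":
    # Smoke test with random data in place of collected rollouts.
    net = MultiHeadActorCritic(obs_dim=8, n_actions=4)
    obs = torch.randn(32, 8)
    logits, values, aux = net(obs)
    loss = multi_head_loss(logits, values, aux,
                           actions=torch.randint(0, 4, (32,)),
                           advantages=torch.randn(32),
                           returns=torch.randn(32),
                           aux_target=torch.randn(32))
    loss.backward()
    print(float(loss))
```

Because every head shares the torso, gradients from the auxiliary regression also shape the features the policy head consumes, which is the mechanism the abstract credits for a better-conditioned feature space and improved behavior under sparse rewards.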

Dates and versions

hal-02305105, version 1 (03-10-2019)
hal-02305105, version 2 (13-10-2019)
hal-02305105, version 3 (29-11-2019)

Identifiers

  • HAL Id: hal-02305105, version 2

Cite

Yannis Flet-Berliac, Philippe Preux. MERL: Multi-Head Reinforcement Learning. NeurIPS 2019 - Deep Reinforcement Learning Workshop, Dec 2019, Vancouver, Canada. ⟨hal-02305105v2⟩