Conference paper — Year: 2022

Better state exploration using action sequence equivalence

Abstract

Incorporating prior knowledge into reinforcement learning algorithms is largely an open question. Even when insights about the environment dynamics are available, reinforcement learning is traditionally used in a tabula rasa setting and must explore and learn everything from scratch. In this paper, we consider the problem of exploiting priors about action sequence equivalence: that is, when different sequences of actions produce the same effect. We propose a new local exploration strategy calibrated to minimize collisions and maximize new state visitations. We show that this strategy can be computed at little cost, by solving a convex optimization problem. By replacing the usual ϵ-greedy strategy in a DQN, we demonstrate its potential in several environments with various dynamic structures.
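To give a rough flavor of the idea, the sketch below is a toy illustration only, not the authors' actual formulation: it assumes the effect-equivalence classes of short action sequences are already known (via a hypothetical effect_of_sequence function), and it spreads sampling probability so that every distinct effect is equally likely to be drawn, which minimizes the chance of wasting exploration steps on equivalent sequences. Minimizing the collision probability (the sum of squared per-class masses) over the probability simplex is a convex problem whose solution is exactly this uniform-over-classes distribution.

    # Toy sketch (not the paper's algorithm): sample length-k action sequences so
    # that equivalent sequences do not waste exploration budget on the same effect.
    # The names below (exploration_distribution, effect_of_sequence) are hypothetical.
    from itertools import product

    def exploration_distribution(n_actions, seq_len, effect_of_sequence):
        """Map each action sequence to a sampling probability that spreads mass
        uniformly over effect-equivalence classes, then uniformly within a class."""
        sequences = list(product(range(n_actions), repeat=seq_len))
        # Group sequences by the (assumed known) effect they produce.
        classes = {}
        for seq in sequences:
            classes.setdefault(effect_of_sequence(seq), []).append(seq)
        n_classes = len(classes)
        probs = {}
        for members in classes.values():
            for seq in members:
                # Equal mass 1/|C| per class, split uniformly among equivalent sequences.
                probs[seq] = 1.0 / (n_classes * len(members))
        return probs

    # Example: two actions whose order does not matter, i.e. (a, b) ~ (b, a).
    dist = exploration_distribution(2, 2, effect_of_sequence=lambda s: tuple(sorted(s)))
    for seq, p in sorted(dist.items()):
        print(seq, round(p, 3))

In this example the two equivalent sequences (0, 1) and (1, 0) share one class, so each receives half the mass of a class (1/6), while the two singleton classes each receive 1/3; sampling from this distribution visits each distinct effect with equal probability.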

Dates and versions

hal-03920349, version 1 (03-01-2023)

Identifiers

  • HAL Id: hal-03920349, version 1

Cite

Nathan Grinsztajn, Toby Johnstone, Johan Ferret, Philippe Preux. Better state exploration using action sequence equivalence. NeurIPS 2022 - Deep Reinforcement Learning Workshop, Dec 2022, Virtual, United States. ⟨hal-03920349⟩