Journal article, Knowledge-Based Systems, 2021

Explainability in Deep Reinforcement Learning


A large part of the emerging explainable Artificial Intelligence (XAI) literature focuses on feature relevance techniques for explaining a deep neural network (DNN) output, or on explaining models that ingest image source data. However, how XAI techniques can help understand models beyond classification tasks, e.g. in reinforcement learning (RL), has not been extensively studied. We review recent work toward Explainable Reinforcement Learning (XRL), a relatively new subfield of Explainable Artificial Intelligence intended for general public applications with diverse audiences, which require ethical, responsible and trustworthy algorithms. In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box. We mainly evaluate studies that directly link explainability to RL, and split them into two categories according to how the explanations are generated: transparent algorithms and post-hoc explainability. We also review the most prominent XAI works through the lens of how they could enlighten the further deployment of the latest advances in RL in the demanding everyday problems of the present and future.
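As a concrete illustration of the post-hoc, feature-relevance style of explanation mentioned in the abstract, the sketch below computes a simple gradient saliency over the input features of a toy policy network. This is a minimal example under assumptions not taken from the paper: it uses PyTorch, a hypothetical `PolicyNet` with a 4-dimensional observation and random (untrained) weights, and plain vanilla-gradient saliency rather than any specific method surveyed in the article.

```python
import torch
import torch.nn as nn

# Hypothetical toy policy network: maps a 4-dimensional observation
# (e.g. a CartPole-like state) to action logits. Illustration only,
# not an architecture from the surveyed paper.
class PolicyNet(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32),
            nn.Tanh(),
            nn.Linear(32, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def saliency(policy, obs):
    """Gradient of the selected action's logit w.r.t. each input feature.

    The absolute gradient magnitude is a rough post-hoc estimate of
    feature relevance: how strongly each observation dimension
    influenced the action the agent chose.
    """
    obs = obs.clone().requires_grad_(True)
    logits = policy(obs)
    action = logits.argmax(dim=-1).item()  # greedy action for this state
    logits[0, action].backward()           # backprop from that single logit
    return obs.grad.abs().squeeze(0)

if __name__ == "__main__":
    policy = PolicyNet()
    obs = torch.randn(1, 4)  # one random observation (batch of size 1)
    relevance = saliency(policy, obs)
    print("per-feature relevance:", relevance.tolist())
```

Such per-feature relevance scores are the simplest instance of the post-hoc explanation category: the policy itself is untouched, and the explanation is computed afterwards from its gradients.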
Main file: XAI_in_RL__Elsevier_typeset_.pdf (1.15 MB). Origin: files produced by the author(s).

Dates and versions

hal-03059366, version 1 (12-12-2020)



Alexandre Heuillet, Fabien Couthouis, Natalia Díaz-Rodríguez. Explainability in Deep Reinforcement Learning. Knowledge-Based Systems, 2021, ⟨10.1016/j.knosys.2020.106685⟩. ⟨hal-03059366⟩