Conference paper. Year: 2020

CopyCAT: Taking Control of Neural Policies with Constant Attacks

Léonard Hussenot
Matthieu Geist
Olivier Pietquin

Abstract

We propose a new perspective on adversarial attacks against deep reinforcement learning agents. Our main contribution is CopyCAT, a targeted attack able to consistently lure an agent into following an outsider's policy. Because the attack is pre-computed, it is fast to apply at inference time and could therefore be used in a real-time scenario. We show its effectiveness on Atari 2600 games in the novel read-only setting. In this setting, the adversary cannot directly modify the agent's state -- its representation of the environment -- but can only attack the agent's observation -- its perception of the environment. Directly modifying the agent's state would require write access to the agent's inner workings, and we argue that this assumption is too strong in realistic settings.
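
The abstract describes pre-computed, constant additive perturbations applied to the agent's observations at inference time so that the agent follows an outsider's chosen actions. The following is a minimal PyTorch sketch of that general idea, not the authors' implementation: the names agent_q, observations, outsider_action, and all hyperparameters are hypothetical, and the pre-computation is cast here as a plain cross-entropy attack optimized over a batch of recorded observations.

```python
import torch
import torch.nn.functional as F

def precompute_mask(agent_q, observations, target_action, epsilon,
                    steps=500, lr=1e-2):
    """Learn one constant additive mask delta such that, for every
    observation in the batch, agent_q(obs + delta) favors target_action.
    agent_q: network mapping observations to action logits (hypothetical).
    observations: batch of recorded observations in [0, 1], shape (N, C, H, W).
    epsilon: L-infinity bound on the perturbation (assumed threat model)."""
    delta = torch.zeros_like(observations[0], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.full((observations.shape[0],), target_action,
                        dtype=torch.long)
    for _ in range(steps):
        # Clamp so perturbed observations stay in the valid pixel range.
        logits = agent_q(torch.clamp(observations + delta, 0.0, 1.0))
        loss = F.cross_entropy(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # keep the mask small
    return delta.detach()

# Online phase: the attack is constant and read-only, so applying it is a
# single addition to the observation -- no gradients at inference time.
# masks = {a: precompute_mask(agent_q, obs_dataset, a, epsilon=0.05)
#          for a in range(num_actions)}
# obs_adv = torch.clamp(obs + masks[outsider_action], 0.0, 1.0)
```

In this sketch, one mask per target action is learned offline; at run time the adversary simply adds the mask matching the outsider policy's desired action, which is what makes a real-time attack plausible.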
Main file: 1905.12282.pdf (3.52 MB). Origin: files produced by the author(s).

Dates and versions

hal-03162124 , version 1 (08-03-2021)

Cite

Léonard Hussenot, Matthieu Geist, Olivier Pietquin. CopyCAT: Taking Control of Neural Policies with Constant Attacks. AAMAS 2020 - 19th International Conference on Autonomous Agents and Multi-Agent Systems, May 2020, Virtual, New Zealand. ⟨hal-03162124⟩