Conference Paper, 2020

CopyCAT: Taking Control of Neural Policies with Constant Attacks

Léonard Hussenot
Matthieu Geist
Olivier Pietquin

Abstract

We propose a new perspective on adversarial attacks against deep reinforcement learning agents. Our main contribution is CopyCAT, a targeted attack able to consistently lure an agent into following an outsider's policy. It is pre-computed, so it is fast to apply at inference time and could thus be used in a real-time scenario. We show its effectiveness on Atari 2600 games in the novel read-only setting. In this setting, the adversary cannot directly modify the agent's state, its representation of the environment, but can only attack the agent's observation, its perception of the environment. Directly modifying the agent's state would require write access to the agent's inner workings, and we argue that this assumption is too strong in realistic settings.
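
To make the mechanism concrete, here is a minimal sketch of how such pre-computed, constant perturbations could be applied at inference time in the read-only setting. All names (`victim_policy`, `outsider_policy`, `masks`) are hypothetical illustrations, and the observations are assumed to be pixel frames normalized to [0, 1]; the pre-computation of the masks themselves follows the paper's training procedure and is not shown here.

```python
import numpy as np

def attacked_step(obs, victim_policy, outsider_policy, masks):
    """One time step of a read-only, pre-computed attack (illustrative sketch).

    `masks[a]` is a constant additive perturbation, pre-computed so that
    the victim, when shown `obs + masks[a]`, tends to choose action `a`.
    Nothing in the victim's internal state is ever written to.
    """
    # The outsider's policy decides which action the victim should take.
    target_action = outsider_policy(obs)
    # Applying the attack is a single addition and clip: no per-step
    # optimization, which is why the attack is usable in real time.
    # (Clipping to [0, 1] assumes normalized pixel observations.)
    perturbed_obs = np.clip(obs + masks[target_action], 0.0, 1.0)
    # The victim acts on its (perturbed) perception of the environment.
    return victim_policy(perturbed_obs)
```

Because the masks are constant, the per-step cost is a lookup and an addition, in contrast with attacks that solve an optimization problem for every new observation.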
Main file
1905.12282.pdf (3.52 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03162124, version 1 (08-03-2021)

Identifiers

  • HAL Id: hal-03162124

Cite

Léonard Hussenot, Matthieu Geist, Olivier Pietquin. CopyCAT: Taking Control of Neural Policies with Constant Attacks. AAMAS 2020 - 19th International Conference on Autonomous Agents and Multi-Agent Systems, May 2020, Virtual, New Zealand. ⟨hal-03162124⟩