Conference paper, Year: 2023

Adaptation of AI Explanations to Users' Roles

Julien Delaunay
Christine Largouët
Luis Galárraga
Niels Van Berkel

Abstract

Surrogate explanations approximate a complex model by training a simpler model over an interpretable space. Among these simpler models, we identify three kinds of surrogate methods: (a) feature-attribution, (b) example-based, and (c) rule-based explanations. Each surrogate approximates the complex model differently, and we hypothesise that this can impact how users interpret the explanation. Despite the numerous calls for introducing explanations for all, no prior work has compared the impact of these surrogates on specific user roles (e.g., domain expert, developer). In this article, we outline a study design to assess the impact of these three surrogate techniques across different user roles.
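As an illustrative sketch only (not part of the paper), a rule-based surrogate in the sense described above can be obtained by fitting a shallow decision tree to a black-box model's predictions in the neighbourhood of an instance; the dataset, models, and parameters below are assumptions chosen for illustration.

    # Illustrative sketch (assumption, not the authors' method): a rule-based
    # surrogate that locally approximates a black-box classifier with a tree.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Black-box model the surrogate will explain (stand-in for any complex model).
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    # Instance to explain and a synthetic neighbourhood sampled around it.
    x = X[0]
    rng = np.random.default_rng(0)
    neighbourhood = x + rng.normal(scale=0.5, size=(500, X.shape[1]))

    # Rule-based surrogate: a shallow tree trained to mimic the black box locally.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(neighbourhood, black_box.predict(neighbourhood))

    # The tree's decision paths read as if-then rules over the interpretable space.
    print(export_text(surrogate, feature_names=[f"f{i}" for i in range(X.shape[1])]))

A feature-attribution surrogate would instead fit a sparse linear model over the same neighbourhood and report its coefficients, while an example-based surrogate would return representative neighbouring instances; the point of the study is how these choices affect different user roles.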
Main file
chi2023f.pdf (885.06 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04388942, version 1 (11-01-2024)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-04388942, version 1

Cite

Julien Delaunay, Christine Largouët, Luis Galárraga, Niels Van Berkel. Adaptation of AI Explanations to Users' Roles. HCXAI 2023 - Workshop on Human-Centered Explainable AI, Apr 2023, Hamburg, Germany. pp.1-7. ⟨hal-04388942⟩
