Conference paper, 2022

When Should We Use Linear Explanations?

Julien Delaunay
Luis Galárraga
Christine Largouët

Abstract

The increasing interest in transparent and fair AI systems has propelled research in explainable AI (XAI). One of the main research lines in XAI is post-hoc explainability, the task of explaining the logic of an already deployed black-box model. This is usually achieved by learning an interpretable surrogate function that approximates the black box. Among the existing explanation paradigms, local linear explanations are among the most popular due to their simplicity and fidelity. Despite these advantages, linear surrogates may not always be the best-suited method to produce reliable, i.e., unambiguous and faithful, explanations. Hence, this paper introduces Adapted Post-hoc Explanations (APE), a novel method that characterizes the decision boundary of a black-box classifier and identifies when a linear model constitutes a reliable explanation. Moreover, characterizing the black-box frontier allows us to provide complementary counterfactual explanations. Our experimental evaluation shows that APE accurately identifies the situations where linear surrogates are suitable while also providing meaningful counterfactual explanations.
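For illustration only, here is a minimal LIME-style sketch of the idea discussed in the abstract: fit a locally weighted linear surrogate to a black-box classifier around one instance and use the surrogate's local fidelity (weighted R²) as a proxy for whether a linear explanation is reliable there. This is not the authors' APE implementation; the `local_linear_explanation` helper, the Gaussian perturbation scheme, and the 0.9 fidelity cut-off are illustrative assumptions.

```python
# Minimal LIME-style sketch (not the authors' APE code): fit a locally
# weighted linear surrogate around an instance and report its weighted R^2
# as a crude reliability check for linear explanations.
import numpy as np
from sklearn.linear_model import Ridge


def local_linear_explanation(black_box, x, n_samples=1000, sigma=0.5, seed=0):
    """Fit a weighted linear surrogate to `black_box` around instance `x`.

    black_box: callable mapping an (n, d) array to class-1 probabilities (n,).
    Returns the surrogate's coefficients and its local fidelity (weighted R^2).
    """
    rng = np.random.default_rng(seed)
    # Probe the black box with Gaussian perturbations around x.
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    y = black_box(Z)
    # Weight samples by proximity to x (RBF kernel), as in LIME.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2.0 * sigma**2))
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    fidelity = surrogate.score(Z, y, sample_weight=w)
    return surrogate.coef_, fidelity


if __name__ == "__main__":
    # Toy black box whose decision frontier is curved in the first feature.
    black_box = lambda Z: 1.0 / (1.0 + np.exp(-(Z[:, 0] ** 2 - Z[:, 1])))
    coef, fidelity = local_linear_explanation(black_box, np.array([0.0, 0.0]))
    print(f"surrogate coefficients: {coef}")
    # Hypothetical 0.9 cut-off: below it, treat the linear explanation as
    # unreliable at this instance.
    print(f"local fidelity R^2 = {fidelity:.3f}; reliable: {fidelity >= 0.9}")
```

In this toy setting, a low fidelity score signals that the black-box frontier is locally non-linear, the kind of situation where, per the abstract, a linear surrogate is not the best-suited explanation.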
Main file: ape-cikm2022.pdf (822.51 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03908363, version 1 (03-01-2023)

Identifiers

  • HAL Id: hal-03908363
  • DOI: 10.1145/3511808.3557489

Cite

Julien Delaunay, Luis Galárraga, Christine Largouët. When Should We Use Linear Explanations?. CIKM 2022 - 31st ACM International Conference on Information and Knowledge Management, ACM, Oct 2022, Atlanta, United States. pp.355-364, ⟨10.1145/3511808.3557489⟩. ⟨hal-03908363⟩
62 Views
115 Downloads
