Conference Paper, Year: 2022

On the Evaluation of the Plausibility and Faithfulness of Sentiment Analysis Explanations

Julia El Zini
Mohamad Mansour
Basel Mousi
Mariette Awad

Abstract

With the pervasive use of Sentiment Analysis (SA) models in financial and social settings, performance is no longer the sole concern for reliable and accountable deployment. SA models are expected to explain their behavior and highlight textual evidence for their predictions. Recently, Explainable AI (ExAI) has been enabling the “third AI wave” by providing explanations for highly non-linear black-box deep AI models. Nonetheless, current ExAI methods, especially in NLP, are evaluated on different datasets with different metrics that probe different aspects. The lack of a common evaluation framework hinders both progress tracking and wider adoption of such methods.

In this work, inspired by offline information retrieval, we propose metrics and techniques to evaluate the explainability of SA models from two angles. First, we evaluate how faithfully the extracted “rationales” explain the predicted outcome. Second, we measure the agreement between ExAI methods and human judgment on a homegrown dataset (dataset and code available at https://gitlab.com/awadailab/exai-nlp-eval) to assess the plausibility of the rationales. Our experiments cover four dimensions: (1) the underlying architectures of the SA models, (2) the approach followed by the ExAI method, (3) the reasoning difficulty, and (4) the homogeneity of the ground-truth rationales.

We empirically demonstrate that Anchors explanations align better with human judgment and extract supporting rationales more confidently. As might be expected, the reasoning complexity of sentiment thwarts ExAI methods in extracting supporting evidence. Moreover, a notable discrepancy emerges between the results of different explainability methods across architectures, suggesting the need for consolidation to achieve better performance. Transformers, predominantly, exhibit better explainability than convolutional and recurrent architectures. Our work paves the way towards designing more interpretable NLP models and establishing a common evaluation ground for their relative strengths and robustness.
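The two evaluation angles above can be made concrete with a small sketch. The snippet below is illustrative only: the metric choices (comprehensiveness and sufficiency for faithfulness, token-level F1 against human rationales for plausibility) and all names (predict_proba, rationale_idx, human_idx) are assumptions for exposition, not the paper's released code; the linked GitLab repository holds the actual implementation.

    from typing import Callable, List, Sequence

    def comprehensiveness(predict_proba: Callable[[List[str]], float],
                          tokens: List[str],
                          rationale_idx: Sequence[int]) -> float:
        # Faithfulness: probability drop for the predicted class once the
        # rationale tokens are removed; a larger drop means the rationale
        # genuinely supported the prediction.
        kept = [t for i, t in enumerate(tokens) if i not in set(rationale_idx)]
        return predict_proba(tokens) - predict_proba(kept)

    def sufficiency(predict_proba: Callable[[List[str]], float],
                    tokens: List[str],
                    rationale_idx: Sequence[int]) -> float:
        # Faithfulness: probability drop when only the rationale tokens are
        # kept; a smaller drop means the rationale alone suffices.
        only = [tokens[i] for i in sorted(set(rationale_idx))]
        return predict_proba(tokens) - predict_proba(only)

    def plausibility_f1(predicted_idx: Sequence[int],
                        human_idx: Sequence[int]) -> float:
        # Plausibility: token-level F1 agreement between the method's
        # rationale and a human-annotated rationale.
        p, h = set(predicted_idx), set(human_idx)
        tp = len(p & h)
        if tp == 0:
            return 0.0
        precision, recall = tp / len(p), tp / len(h)
        return 2 * precision * recall / (precision + recall)

With a sentiment classifier wrapped so that predict_proba returns the probability of the predicted class, a faithful explainer should score high on comprehensiveness and show a small sufficiency drop, while a plausibility_f1 close to 1 signals strong agreement with human judgment.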
File under embargo until Wednesday, 1 January 2025.

Dates and versions

hal-04668678 , version 1 (07-08-2024)

Cite

Julia El Zini, Mohamad Mansour, Basel Mousi, Mariette Awad. On the Evaluation of the Plausibility and Faithfulness of Sentiment Analysis Explanations. 18th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), Jun 2022, Hersonissos, Greece. pp.338-349, ⟨10.1007/978-3-031-08337-2_28⟩. ⟨hal-04668678⟩