Preprint, Working Paper. Year: 2024

Dissecting Causal Biases

Abstract

Accurately measuring discrimination in machine learning-based automated decision systems is required to address the vital issue of fairness between subpopulations and/or individuals. Any bias in measuring discrimination can lead to either amplification or underestimation of the true value of discrimination. This paper focuses on a class of biases originating in the way training data is generated and/or collected. We call this class causal biases and use tools from the field of causality to formally define and analyze such biases. Four sources of bias are considered, namely, confounding, selection, measurement, and interaction. The main contribution of this paper is to provide, for each source of bias, a closed-form expression in terms of the model parameters. This makes it possible to analyze the behavior of each source of bias, in particular, the cases in which it is absent and the cases in which it is maximized. We hope that the provided characterizations help the community better understand the sources of bias in machine learning applications.
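
To make the notion of confounding bias concrete, here is a minimal sketch, not taken from the paper: it assumes a toy linear-Gaussian model with illustrative coefficients alpha (W influences A), beta (W influences Y), and tau (the true effect of A on Y), simulates a sensitive attribute A and an outcome Y that share a confounder W, and compares the naive association between A and Y with a confounder-adjusted estimate. In this simple setting the gap matches the classic omitted-variable closed form beta * Cov(A, W) / Var(A), which echoes the kind of closed-form characterization the abstract describes without reproducing the paper's actual parameterization.

# Illustrative sketch (assumed toy model, not the paper's): confounding bias
# in a linear-Gaussian setting where a confounder W drives both A and Y.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

alpha, beta, tau = 0.8, 1.5, 0.5             # W -> A, W -> Y, A -> Y (true effect)

W = rng.normal(size=n)                        # confounder
A = alpha * W + rng.normal(size=n)            # sensitive attribute / treatment
Y = tau * A + beta * W + rng.normal(size=n)   # outcome

# Naive (confounded) estimate: simple regression of Y on A only.
naive = np.cov(A, Y)[0, 1] / np.var(A)

# Adjusted estimate: regression of Y on A and W, controlling for the confounder.
X = np.column_stack([A, W])
adjusted = np.linalg.lstsq(X, Y, rcond=None)[0][0]

# Closed-form confounding (omitted-variable) bias under this linear model.
bias_formula = beta * np.cov(A, W)[0, 1] / np.var(A)

print(f"true effect tau    = {tau}")
print(f"naive estimate     = {naive:.3f}")
print(f"adjusted estimate  = {adjusted:.3f}")
print(f"naive - adjusted   = {naive - adjusted:.3f}")
print(f"closed-form bias   = {bias_formula:.3f}")

Running the sketch shows the naive estimate overshooting the true effect by roughly the closed-form bias term, while the adjusted estimate recovers tau; this is the general mechanism (a confounder distorting the measured association) that the paper analyzes in the context of discrimination measurement.
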
Main file: Causal_Biases_Hal-23.pdf (5.16 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04329098, version 1 (07-12-2023)
hal-04329098, version 2 (21-01-2024)

License

Attribution

Identifiers

  • HAL Id: hal-04329098, version 2

Cite

Rūta Binkytė, Sami Zhioua, Yassine Turki. Dissecting Causal Biases. 2024. ⟨hal-04329098v2⟩