Conference paper, Year: 2021

Fairness-Aware Training of Decision Trees by Abstract Interpretation

Abstract

We study the problem of formally verifying individual fairness of decision tree ensembles, as well as training tree models that maximize both accuracy and individual fairness. In our approach, fairness verification and fairness-aware training both rely on a notion of stability of a classifier, which generalizes the standard notion of robustness to input perturbations used in adversarial machine learning. Our verification and training methods leverage abstract interpretation, a well-established mathematical framework for designing computable, correct, and precise approximations of potentially infinite behaviors. We implemented our fairness-aware learning method by building on a tool for adversarial training of decision trees, and evaluated it on the reference datasets of the fairness-in-machine-learning literature. The experimental results show that our approach trains tree models exhibiting a high degree of individual fairness compared with standard state-of-the-art CART trees and random forests. Moreover, as a by-product, these fairness-aware decision trees turn out to be significantly more compact, which naturally enhances their interpretability.
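
A minimal sketch, not the authors' tool, of the kind of stability check the abstract describes: individual fairness of a single decision tree is verified by checking that the prediction is constant over a box of "similar" individuals, using an interval (box) abstraction of the tree in the spirit of abstract interpretation. The similarity relation (an epsilon perturbation of all features) and every name here (Node, reachable_labels, is_individually_fair) are hypothetical illustrations, not the paper's API.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Node:
        feature: Optional[int] = None   # index of the feature tested at this node
        threshold: float = 0.0          # split test: go left iff value <= threshold
        left: Optional["Node"] = None
        right: Optional["Node"] = None
        label: Optional[int] = None     # class label (leaf nodes only)

    Box = List[Tuple[float, float]]     # one interval [lo, hi] per feature

    def reachable_labels(node: Node, box: Box) -> set:
        """Abstract (interval) semantics of the tree: collect every label that
        some concrete input inside `box` can reach. If the box straddles a
        threshold, both branches are explored, so the result over-approximates
        the concrete behaviours and the analysis is sound."""
        if node.label is not None:
            return {node.label}
        lo, hi = box[node.feature]
        labels = set()
        if lo <= node.threshold:        # some point of the box goes left
            labels |= reachable_labels(node.left, box)
        if hi > node.threshold:         # some point of the box goes right
            labels |= reachable_labels(node.right, box)
        return labels

    def is_individually_fair(tree: Node, x: List[float], eps: float) -> bool:
        """Stability check: True proves that every individual within the
        eps-box around x receives the same label; False is inconclusive,
        since the over-approximation may introduce spurious labels."""
        box = [(v - eps, v + eps) for v in x]
        return len(reachable_labels(tree, box)) == 1

A fairness-aware training loop could use such a check as a penalty or selection criterion when growing splits; the paper's method additionally handles tree ensembles and more general similarity relations than the plain epsilon-box assumed above.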

Dates and versions

hal-03545701 , version 1 (05-04-2022)


Cite

Francesco Ranzato, Caterina Urban, Marco Zanella. Fairness-Aware Training of Decision Trees by Abstract Interpretation. CIKM 2021 - 30th ACM International Conference on Information and Knowledge Management, Nov 2021, Queensland / Virtual, Australia. pp.1508-1517, ⟨10.1145/3459637.3482342⟩. ⟨hal-03545701⟩