Automatic Fairness Testing of Machine Learning Models
Abstract
In recent years, machine learning (ML) has increasingly been applied in decision-making systems, prompting an urgent need to validate requirements on ML models. Fairness is one such requirement that must be ensured in numerous application domains. It requires that software "learned" by an ML algorithm is not biased, in the sense of discriminating on certain attributes (such as gender or age) by giving different decisions when the values of these attributes are flipped. In this work, we apply verification-based testing (VBT) to fairness checking of ML models. Verification-based testing employs verification technology to generate test cases that potentially violate the property of interest. For fairness testing, we additionally provide a specification language for formalizing different fairness requirements. From the ML model under test and the fairness specification, VBT automatically generates test inputs specific to the specified fairness requirement. An empirical evaluation on several benchmark ML models shows that verification-based testing outperforms existing fairness testing techniques with respect to effectiveness.
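To illustrate the flip-based notion of fairness described above, the following is a minimal sketch of checking whether flipping a protected attribute changes a model's decision. The model, feature layout, and protected-attribute index are hypothetical placeholders, not the paper's actual benchmarks or tooling; any classifier with a scikit-learn-style predict() would fit.

```python
# Sketch of a flip-based (individual) fairness check, assuming a toy
# binary-feature model; not the paper's VBT implementation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy training data: the last column plays the role of a protected
# attribute (e.g. gender encoded as 0/1). Purely illustrative.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))
y = rng.integers(0, 2, size=200)
model = DecisionTreeClassifier().fit(X, y)

def violates_fairness(model, x, protected_idx):
    """Return True if flipping the protected attribute changes the decision."""
    x_flipped = x.copy()
    x_flipped[protected_idx] = 1 - x_flipped[protected_idx]
    return (model.predict(x.reshape(1, -1))[0]
            != model.predict(x_flipped.reshape(1, -1))[0])

# Scan candidate test inputs for a discriminatory instance.
for x in rng.integers(0, 2, size=(100, 4)):
    if violates_fairness(model, x, protected_idx=3):
        print("Discriminatory input found:", x)
        break
```

The sketch relies on random search over inputs; the contribution described in the abstract instead uses verification technology to derive such discriminatory inputs directly from the model and a formal fairness specification.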