Conference paper · Year: 2024

On the Calibration of Epistemic Uncertainty: Principles, Paradoxes and Conflictual Loss

Abstract

The calibration of predictive distributions has been widely studied in deep learning, but the same cannot be said about the more specific epistemic uncertainty produced by Deep Ensembles, Bayesian Deep Networks, or Evidential Deep Networks. Although measurable, this form of uncertainty is difficult to calibrate on an objective basis, as it depends on the prior, for which a variety of choices exist. Nevertheless, epistemic uncertainty must in all cases satisfy two formal requirements: first, it must decrease as the training dataset grows larger and, second, it must increase with model expressiveness. Despite these expectations, our experimental study shows that on several reference datasets and models, measures of epistemic uncertainty violate these requirements, sometimes exhibiting trends completely opposite to those expected. These paradoxes between expectation and reality raise the question of the true utility of epistemic uncertainty as estimated by these models. A formal argument suggests that this disagreement is due to a poor approximation of the posterior distribution rather than to a flaw in the measure itself. Based on this observation, we propose a regularization function for deep ensembles, called the conflictual loss, in line with the above requirements. We emphasize its strengths by showing experimentally that it fulfills both requirements of epistemic uncertainty, without sacrificing either the performance or the calibration of the deep ensembles.
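The abstract refers to measures of epistemic uncertainty produced by deep ensembles. A common such measure (not necessarily the exact one used in the paper) is the mutual-information decomposition of predictive entropy, which separates disagreement between ensemble members (epistemic) from the members' own average entropy (aleatoric). The minimal sketch below, with hypothetical function and argument names, computes this quantity from the members' softmax outputs; averaging it over a held-out set lets one empirically check the two requirements stated above, namely that it should shrink as the training set grows and rise as model expressiveness increases.

```python
import numpy as np

def ensemble_epistemic_uncertainty(member_probs, eps=1e-12):
    """Mutual-information estimate of epistemic uncertainty for a deep ensemble.

    member_probs: array of shape (n_members, n_samples, n_classes) holding
    each member's softmax outputs for a batch of inputs.
    Returns an array of shape (n_samples,) with the epistemic term
    I = H[mean prediction] - mean member entropy.
    """
    # Total (predictive) uncertainty: entropy of the averaged prediction.
    mean_probs = member_probs.mean(axis=0)
    total = -np.sum(mean_probs * np.log(mean_probs + eps), axis=-1)

    # Aleatoric part: average entropy of the individual member predictions.
    member_entropies = -np.sum(member_probs * np.log(member_probs + eps), axis=-1)
    aleatoric = member_entropies.mean(axis=0)

    # Epistemic part: disagreement between members (mutual information).
    return total - aleatoric
```

For instance, with an ensemble of five classifiers, one would stack their softmax outputs into a (5, batch, n_classes) array and track the mean of the returned values across training-set sizes or model capacities.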
Main file: ECML_2024.pdf (5.19 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04695978, version 1 (12-09-2024)

Identifiers

  • HAL Id: hal-04695978, version 1

Cite

Mohammed Fellaji, F. Pennerath, Brieuc Conan-Guez, Miguel Couceiro. On the Calibration of Epistemic Uncertainty: Principles, Paradoxes and Conflictual Loss. Machine Learning and Knowledge Discovery in Databases: Research Track, ECML-PKDD 2024, Sep 2024, Vilnius, Lithuania, pp. 160-176. ⟨hal-04695978⟩