Journal article, Journal of Machine Learning Research, 2024

On Tail Decay Rate Estimation of Loss Function Distributions

Abstract

The study of loss-function distributions is critical for characterizing a model's behaviour on a given machine-learning problem. While model quality is commonly measured by the average loss assessed on a testing set, this quantity alone does not establish that the mean of the loss distribution exists. Conversely, the existence of a distribution's statistical moments can be verified by examining the thickness of its tails. Cross-validation schemes determine a family of testing loss distributions conditioned on the training sets. By marginalizing across training sets, we can recover the overall (marginal) loss distribution, whose tail shape we aim to estimate. Small sample sizes diminish the reliability and efficiency of classical tail-estimation methods such as Peaks-Over-Threshold, and we demonstrate that this effect is particularly pronounced when estimating tails of marginal distributions composed of conditional distributions with substantial variability in tail location. We mitigate this problem by exploiting a result we prove: under certain conditions, the tail-shape parameter of the marginal distribution is the maximum of the tail-shape parameters across the conditional distributions underlying the marginal. We call the resulting approach cross-tail estimation (CTE). We test CTE in a series of experiments on simulated and real data, showing improved robustness and quality of tail estimation compared to classical approaches.
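
As a concrete illustration of the idea summarized above, the following is a minimal sketch (not the authors' implementation) of cross-tail estimation in Python. It assumes that the tail-shape parameter of each conditional (per-fold) loss sample is estimated with a standard Peaks-Over-Threshold fit of a generalized Pareto distribution to exceedances over a high quantile threshold, and that the maximum across folds is reported as the marginal tail-shape estimate. The function names, the 90% threshold, and the toy data are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.stats import genpareto

    def pot_tail_shape(losses, quantile=0.9):
        # Peaks-Over-Threshold: fit a generalized Pareto distribution to the
        # exceedances over a high quantile threshold; return the shape xi.
        threshold = np.quantile(losses, quantile)
        exceedances = losses[losses > threshold] - threshold
        xi, _, _ = genpareto.fit(exceedances, floc=0)
        return xi

    def cross_tail_estimate(per_fold_losses, quantile=0.9):
        # Cross-tail estimation (CTE) sketch: estimate xi on each conditional
        # (per-fold) loss sample and take the maximum as the estimate of the
        # marginal tail-shape parameter, following the result stated above.
        return max(pot_tail_shape(np.asarray(l), quantile) for l in per_fold_losses)

    # Toy usage (hypothetical data): heavy-tailed per-fold losses whose tail
    # location varies across folds, mimicking the cross-validation setting.
    rng = np.random.default_rng(0)
    folds = [rng.pareto(3.0, 2000) + rng.normal(0.0, 5.0) for _ in range(10)]
    print(cross_tail_estimate(folds))  # expected to be near 1/3 (Pareto(3) tail)

In this toy example each fold has tail index 1/3; the per-fold POT estimates fluctuate around that value, and their maximum is the CTE estimate of the marginal tail shape.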
Main file: On_Tail_Decay_Rate_Estimation_of_Loss_Function_Distributions.pdf (6.6 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03911884, version 1 (23-12-2022)
hal-03911884, version 2 (23-12-2023)
hal-03911884, version 3 (16-05-2024)

Identifiers

  • HAL Id: hal-03911884, version 3

Cite

Etrit Haxholli, Marco Lorenzi. On Tail Decay Rate Estimation of Loss Function Distributions. Journal of Machine Learning Research, 2024, 25 (25), pp. 1-47. ⟨hal-03911884v3⟩