Empirical Risk Minimization with Relative Entropy Regularization: Optimality and Sensitivity Analysis
Abstract
The optimality and sensitivity of the empirical risk minimization problem with relative entropy regularization (ERM-RER) are investigated for the case in which the reference measure is a $\sigma$-finite measure instead of a probability measure. This generalization allows greater flexibility in incorporating prior knowledge over the set of models. In this setting, the interplay among the regularization parameter, the reference measure, the risk function, and the empirical risk induced by the solution of the ERM-RER problem is characterized. This characterization yields necessary and sufficient conditions for the existence of regularization parameters that achieve arbitrarily small empirical risk with arbitrarily high probability. Additionally, the sensitivity of the expected empirical risk to deviations from the solution of the ERM-RER problem is studied. Dataset-dependent and dataset-independent upper bounds on the absolute value of the sensitivity are presented. In a special case, it is shown that the expectation (with respect to the datasets) of the absolute value of the sensitivity is upper bounded, up to a constant factor, by the square root of the lautum information between the models and the datasets.
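For orientation, the ERM-RER problem underlying the abstract can be stated as follows; the notation here ($\Theta$ for the set of models, $z$ for the dataset, $\mathsf{L}$ for the empirical risk, $Q$ for the reference measure, $\lambda$ for the regularization parameter) is illustrative rather than fixed by the abstract. Given a dataset $z$, an empirical risk function $\mathsf{L}(\cdot, z)$ on $\Theta$, a $\sigma$-finite reference measure $Q$ on $\Theta$, and $\lambda > 0$, the problem is

$$ \min_{P} \; \int_{\Theta} \mathsf{L}(\theta, z) \, \mathrm{d}P(\theta) + \lambda \, D(P \,\|\, Q), $$

where the minimization is over probability measures $P$ on $\Theta$ for which $D(P \,\|\, Q)$ is finite. The solution, when the normalizing integral below is finite, is the Gibbs probability measure $P^{\star}$ with Radon-Nikodym derivative with respect to $Q$ given by

$$ \frac{\mathrm{d}P^{\star}}{\mathrm{d}Q}(\theta) = \frac{\exp\!\bigl(-\tfrac{1}{\lambda}\,\mathsf{L}(\theta, z)\bigr)}{\int_{\Theta} \exp\!\bigl(-\tfrac{1}{\lambda}\,\mathsf{L}(\nu, z)\bigr)\,\mathrm{d}Q(\nu)}. $$

As $\lambda \to 0^{+}$, this measure concentrates on low-risk models, which is the regime in which arbitrarily small empirical risk becomes achievable. In this illustrative notation, the sensitivity studied in the abstract can be read as the gap $\int_{\Theta} \mathsf{L}\,\mathrm{d}P - \int_{\Theta} \mathsf{L}\,\mathrm{d}P^{\star}$ incurred when an alternative measure $P$ deviates from the solution $P^{\star}$.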