Simpler PAC-Bayesian Bounds for Hostile Data
Abstract
PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution $\rho$ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution $\pi$. Unfortunately, most of the available bounds typically rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as \emph{hostile data}). In these bounds, the Kullback-Leibler divergence is replaced with a general version of Csisz\'ar's $f$-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
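For context, the following sketch recalls a classical PAC-Bayesian bound in the standard (bounded, i.i.d.) setting and the definition of Csisz\'ar's $f$-divergence; these are textbook statements given here for orientation only, not the bound proved in this paper. Here $r(\theta)$ denotes the empirical risk on $n$ observations, $R(\theta)$ the expected risk, and the loss is assumed to take values in $[0,1]$.

\begin{align}
% Classical McAllester-style PAC-Bayesian bound (bounded loss, i.i.d. data):
% with probability at least 1 - \delta over the sample, simultaneously for all \rho,
\mathbb{E}_{\theta \sim \rho}\left[ R(\theta) \right]
  &\le \mathbb{E}_{\theta \sim \rho}\left[ r(\theta) \right]
     + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} .
\\[1ex]
% Csiszar f-divergence, for a convex function f with f(1) = 0 and \rho \ll \pi;
% the choice f(x) = x \ln x recovers the Kullback-Leibler divergence.
D_f(\rho \,\|\, \pi)
  &= \int f\!\left( \frac{\mathrm{d}\rho}{\mathrm{d}\pi} \right) \mathrm{d}\pi .
\end{align}

The bounds in the paper are of a similar flavour, but trade the Kullback-Leibler term for an $f$-divergence, which is what allows boundedness and independence to be dropped.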