Conference paper, Year: 2012

Mixability in Statistical Learning

Abstract

Statistical learning and sequential prediction are two different but related formalisms to study the quality of predictions. Mapping out their relations and transferring ideas is an active area of investigation. We provide another piece of the puzzle by showing that an important concept in sequential prediction, the mixability of a loss, has a natural counterpart in the statistical setting, which we call stochastic mixability. Just as ordinary mixability characterizes fast rates for the worst-case regret in sequential prediction, stochastic mixability characterizes fast rates in statistical learning. We show that, in the special case of log-loss, stochastic mixability reduces to a well-known (but usually unnamed) martingale condition, which is used in existing convergence theorems for minimum description length and Bayesian inference. In the case of 0/1-loss, it reduces to the margin condition of Mammen and Tsybakov, and in the case that the model under consideration contains all possible predictors, it is equivalent to ordinary mixability.
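For readers who want the formal statements behind the abstract, the following is a sketch of the two central definitions in standard notation; the symbols η (learning rate), 𝓕 (model), and f* (risk minimizer) follow common usage in this literature and are assumptions here rather than quotes from the paper, so the exact formulations should be checked against the PDF.

```latex
% Ordinary mixability (sequential prediction): a loss \ell is
% \eta-mixable if every distribution \pi over predictions can be
% "mixed" into a single prediction a_\pi that is at least as good
% on every outcome y:
\ell(y, a_\pi) \;\le\; -\frac{1}{\eta} \log \int e^{-\eta\,\ell(y, a)} \,\mathrm{d}\pi(a)
  \quad \text{for all } y.

% Stochastic mixability (statistical learning): the triple
% (\ell, \mathcal{F}, P) is \eta-stochastically mixable if, with
% f^* the minimizer of expected loss over the model \mathcal{F},
\mathbb{E}_{Z \sim P}\!\left[ e^{-\eta\,(\ell(f, Z) - \ell(f^*, Z))} \right] \;\le\; 1
  \quad \text{for all } f \in \mathcal{F}.

% Specializing to log-loss \ell(f, z) = -\log p_f(z) with \eta = 1
% recovers the martingale-type condition familiar from MDL and
% Bayesian convergence proofs:
\mathbb{E}_{Z \sim P}\!\left[ \frac{p_f(Z)}{p_{f^*}(Z)} \right] \;\le\; 1.
```

The structural parallel is the point: the statistical condition mirrors the sequential one, with the mixture over predictions replaced by an expectation under the data distribution.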
Main file
stochmix.pdf (318.07 KB)
Origin: files produced by the author(s)

Dates and versions

hal-00758202, version 1 (28-11-2012)

Identifiers

  • HAL Id: hal-00758202, version 1

Cite

Tim van Erven, Peter D. Grünwald, Mark D. Reid, Robert C. Williamson. Mixability in Statistical Learning. Advances in Neural Information Processing Systems 25 (NIPS 2012), Dec 2012, Lake Tahoe, United States. ⟨hal-00758202⟩
197 Views
107 Downloads
