Conference Paper, 2012

Mixability in Statistical Learning

Abstract

Statistical learning and sequential prediction are two different but related formalisms to study the quality of predictions. Mapping out their relations and transferring ideas is an active area of investigation. We provide another piece of the puzzle by showing that an important concept in sequential prediction, the mixability of a loss, has a natural counterpart in the statistical setting, which we call stochastic mixability. Just as ordinary mixability characterizes fast rates for the worst-case regret in sequential prediction, stochastic mixability characterizes fast rates in statistical learning. We show that, in the special case of log-loss, stochastic mixability reduces to a well-known (but usually unnamed) martingale condition, which is used in existing convergence theorems for minimum description length and Bayesian inference. In the case of 0/1-loss, it reduces to the margin condition of Mammen and Tsybakov, and in the case that the model under consideration contains all possible predictors, it is equivalent to ordinary mixability.
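As a pointer to the central definition (a paraphrase under our reading of the abstract, not a quotation from the paper; the exact formulation should be checked against the full text): a learning problem with loss $\ell$, model $\mathcal{F}$, and data distribution $P$ is called $\eta$-stochastically mixable for some $\eta > 0$ if, writing $f^{*}$ for the minimizer of expected loss over $\mathcal{F}$, every $f \in \mathcal{F}$ satisfies
\[
\mathbb{E}_{Z \sim P}\left[ e^{\eta\left(\ell(f^{*}, Z) - \ell(f, Z)\right)} \right] \le 1 .
\]
For log-loss, $\ell(f, z) = -\log p_{f}(z)$ with $\eta = 1$, the left-hand side becomes $\mathbb{E}[\,p_{f}(Z)/p_{f^{*}}(Z)\,]$, which is the martingale-type condition from minimum description length and Bayesian convergence theorems that the abstract refers to.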
Main file: stochmix.pdf (318.07 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00758202, version 1 (28-11-2012)

Identifiers

  • HAL Id: hal-00758202, version 1

Cite

Tim van Erven, Peter D. Grünwald, Mark D. Reid, Robert C. Williamson. Mixability in Statistical Learning. Advances in Neural Information Processing Systems 25 (NIPS 2012), Dec 2012, Lake Tahoe, United States. ⟨hal-00758202⟩
196 views, 106 downloads
