Journal Article: Applied Stochastic Models in Business and Industry, 2003

Bounds on the Risk for M-SVMs

Yann Guermeur
André Elisseeff
Dominique Zelus

Abstract

Vapnik's statistical learning theory has mainly been developed for two types of problems: pattern recognition (computation of dichotomies) and regression (estimation of real-valued functions). Only in recent years has multi-class discriminant analysis been studied in its own right. Extending several standard results, among them a well-known theorem by Bartlett, we have derived distribution-free uniform strong laws of large numbers for multi-class large-margin discriminant models. The capacity measure appearing in the confidence interval, a covering number, has been bounded from above in terms of a new generalized VC dimension. In this paper, these theorems are applied to the architecture shared by all the multi-class SVMs proposed so far, which provides a simple theoretical framework in which to study them, compare their performance, and design new machines.
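For a concrete sense of the form such results take, the sketch below writes down a generic margin-based risk bound with a covering-number capacity term, in the style of Bartlett-type theorems. The constants and scales shown are illustrative assumptions, not the precise statement established in the paper. With probability at least 1 - \delta over a sample of size m, simultaneously for every function f in the class \mathcal{F} and a fixed margin \gamma > 0,

    R(f) \;\le\; R^{\mathrm{emp}}_{\gamma}(f) \;+\; \sqrt{\frac{2}{m}\left(\ln \mathcal{N}\!\left(\gamma/2,\,\mathcal{F},\,2m\right) + \ln\frac{2}{\delta}\right)},

where R(f) denotes the risk (probability of classification error), R^{\mathrm{emp}}_{\gamma}(f) the fraction of training examples classified with margin smaller than \gamma, and \mathcal{N}(\gamma/2, \mathcal{F}, 2m) a covering number of the class at scale \gamma/2. It is a capacity term of this kind that the paper bounds from above via a generalized VC dimension, yielding bounds applicable to multi-class SVM architectures.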
No file available.

Dates and versions

inria-00099587, version 1 (26-09-2006)

Identifiers

  • HAL Id: inria-00099587, version 1

Cite

Yann Guermeur, André Elisseeff, Dominique Zelus. Bounds on the Risk for M-SVMs. Applied Stochastic Models in Business and Industry, 2003. ⟨inria-00099587⟩