A Simple Unifying Theory of Multi-Class Support Vector Machines
Abstract
Vapnik's statistical learning theory has mainly been developed for two types of problems: pattern recognition (computation of dichotomies) and regression (estimation of real-valued functions). Only in recent years has multi-class discriminant analysis been studied in its own right. Extending several standard results, including a well-known theorem due to Bartlett, we have derived distribution-free uniform strong laws of large numbers for multi-class large margin discriminant models. This technical report deals with the computation of the capacity measures involved in these bounds on the expected risk. Straightforward extensions of results on large margin classifiers highlight the central role played by a new generalized VC dimension, which can be seen either as an extension of the fat-shattering dimension to the multivariate case, or as a scale-sensitive version of the graph dimension. The theorems derived are applied to the architecture shared by all the multi-class SVMs proposed so far, which provides us with a simple theoretical framework to study them, compare their performance, and design new machines.
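For reference, the scale-sensitive capacity measure that the new generalized VC dimension extends to the multivariate case is the fat-shattering dimension. A standard formulation is sketched below for background only; the notation is chosen here for illustration and is not taken from the report.

A set $\{x_1, \dots, x_n\}$ is said to be $\gamma$-shattered by a class $F$ of real-valued functions if there exist witnesses $r_1, \dots, r_n \in \mathbb{R}$ such that every sign pattern can be realized with margin at least $\gamma$:
$$\forall (b_1, \dots, b_n) \in \{-1, +1\}^n, \ \exists f \in F : \quad b_i \bigl( f(x_i) - r_i \bigr) \ge \gamma, \qquad 1 \le i \le n,$$
and $\mathrm{fat}_F(\gamma)$ is the largest $n$ for which such a $\gamma$-shattered set exists (or $\infty$ if no maximum exists).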