Nearest neighbor classification in infinite dimension
Abstract
Let $X$ be a random element in a metric space $(\mathcal{F},d)$, and let $Y$ be a random variable taking values $0$ and $1$, called the class, or the label, of $X$. Assume that we observe $n$ i.i.d. copies $(X_i,Y_i)_{1\leq i\leq n}$ of the pair $(X,Y)$. The classification problem is to predict the label of a new random element $X$. The $k$-nearest neighbor classifier consists in the following simple rule: look at the $k$ nearest neighbors of $X$ among the $X_i$ and choose $0$ or $1$ for its label according to the majority vote. When $(\mathcal{F},d)=(\mathbb{R}^d,\|\cdot\|)$, Stone proved in 1977 the universal consistency of this classifier: its probability of error converges to the Bayes error, whatever the distribution of $(X,Y)$. We show in this paper that this result is no longer valid in general metric spaces. However, if $(\mathcal{F},d)$ is separable and if a regularity condition is assumed, then the $k$-nearest neighbor classifier is weakly consistent.
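For concreteness, the rule and the notion of consistency invoked above admit the following standard formalization (the notations $g_n$, $X_{(i)}(x)$, $\eta$ and $L^*$ are introduced here for illustration and do not appear in the abstract itself). Given $x\in\mathcal{F}$, reorder the sample as $X_{(1)}(x),\dots,X_{(n)}(x)$ so that $d(X_{(1)}(x),x)\leq\dots\leq d(X_{(n)}(x),x)$, write $Y_{(i)}(x)$ for the label of $X_{(i)}(x)$, and set
\[
g_n(x) \;=\; \mathbf{1}\Big\{ \sum_{i=1}^{k} \mathbf{1}\{Y_{(i)}(x)=1\} \;>\; \frac{k}{2} \Big\}.
\]
Weak consistency then means $\mathbb{P}\big(g_n(X)\neq Y\big) \to L^*$ as $n\to\infty$, where $L^* = \mathbb{E}\big[\min\big(\eta(X),\,1-\eta(X)\big)\big]$ is the Bayes error and $\eta(x)=\mathbb{P}(Y=1\mid X=x)$ is the regression function.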