Adaptive Sampling Under Low Noise Conditions
Abstract
We survey recent results on efficient margin-based algorithms for adaptive sampling in binary classification tasks. Using the so-called Mammen-Tsybakov low noise condition to parametrize the distribution of covariates, and assuming linear label noise, we state bounds on the convergence rate of the adaptive sampler to the Bayes risk. These bounds show that, excluding logarithmic factors, the average risk converges to the Bayes risk at rate $N^{-\frac{(1+a)(2+a)}{2(3+a)}}$, where $N$ denotes the number of queried labels and $a$ is the nonnegative exponent in the low noise condition. For all $a > \sqrt{3}-1$ this convergence rate is asymptotically faster than the rate $N^{-(1+a)/(2+a)}$ achieved by the fully supervised version of the base adaptive sampler, which queries all labels. Moreover, as $a$ grows to infinity (the hard margin condition), the gap between the semi-supervised and fully supervised rates becomes exponential.
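As a brief check of the threshold claim (not part of the original abstract, just an elementary comparison of the two exponents): the semi-supervised rate is asymptotically faster than the fully supervised one exactly when its exponent exceeds $(1+a)/(2+a)$, i.e.
$$\frac{(1+a)(2+a)}{2(3+a)} > \frac{1+a}{2+a} \;\Longleftrightarrow\; (2+a)^2 > 2(3+a) \;\Longleftrightarrow\; a^2 + 2a - 2 > 0 \;\Longleftrightarrow\; a > \sqrt{3}-1,$$
where the first equivalence uses $1+a > 0$ and $3+a > 0$ for $a \ge 0$.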
Domains
Statistics [math.ST]