Robust supervised classification and feature selection using a primal-dual method
Abstract
This paper deals with supervised classification and feature selection in high-dimensional spaces. A classical approach is to project the data onto a low-dimensional space and to classify by minimizing an appropriate quadratic cost. Strict control of sparsity is moreover obtained by adding an ℓ1 constraint, here on the matrix of weights used for projecting the data. Tuning the sparsity bound amounts to selecting the features relevant for supervised classification. However, a quadratic cost (a squared ℓ2 norm, in practice) for the data term is not robust to outliers. In this paper, we address this problem by using an ℓ1 norm both for the constraint and for the loss function. The criterion then remains convex but is no longer gradient Lipschitz. A second issue is that we optimize the projection matrix and the centers used for classification simultaneously. Notwithstanding the lack of regularity, we provide a novel tailored constrained primal-dual method that computes both variables jointly, with convergence proofs. We demonstrate the effectiveness of our method on three datasets (one synthetic, two from biological data) and provide a comparison between the ℓ1 and ℓ2 costs.
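To make the optimization setting concrete, below is a minimal sketch of a standard primal-dual scheme (the Chambolle-Pock iteration) applied to a simplified, vector-valued instance of the problem, min over ||w||_1 ≤ η of ||Xw − Y||_1. This is illustrative only, not the authors' algorithm: the function names (`project_l1_ball`, `primal_dual_l1`), the step-size choices, and the restriction to a single weight vector (rather than the joint update of the projection matrix and the class centers described in the abstract) are our assumptions.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto the l1 ball {w : ||w||_1 <= radius}
    (sort-based algorithm of Duchi et al., 2008)."""
    u = np.abs(v)
    if u.sum() <= radius:
        return v
    s = np.sort(u)[::-1]                      # sorted magnitudes, descending
    cssv = np.cumsum(s)
    k = np.arange(1, len(s) + 1)
    rho = np.nonzero(s * k > cssv - radius)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(u - theta, 0.0)

def primal_dual_l1(X, Y, eta, n_iter=500):
    """Chambolle-Pock iterations for  min_{||w||_1 <= eta} ||X w - Y||_1.
    The nonsmooth l1 loss is handled via its conjugate, the indicator of
    the l-infinity ball, whose prox is a simple clip."""
    L = np.linalg.norm(X, 2)                  # operator norm of X
    tau = sigma = 1.0 / L                     # step sizes: tau*sigma*L**2 <= 1
    w = np.zeros(X.shape[1])
    w_bar = w.copy()
    z = np.zeros(X.shape[0])                  # dual variable
    for _ in range(n_iter):
        # dual step: prox of sigma*F* with F(u) = ||u - Y||_1,
        # i.e. clip(z + sigma*(X @ w_bar - Y)) onto [-1, 1]
        z = np.clip(z + sigma * (X @ w_bar - Y), -1.0, 1.0)
        # primal step: gradient step on the linear term, then projection
        # onto the l1 ball (the prox of the constraint's indicator)
        w_new = project_l1_ball(w - tau * (X.T @ z), eta)
        # extrapolation (over-relaxation) on the primal variable
        w_bar = 2.0 * w_new - w
        w = w_new
    return w

if __name__ == "__main__":
    # tiny synthetic check: recover a 3-sparse weight vector
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 50))
    w_true = np.zeros(50)
    w_true[:3] = 1.0
    Y = X @ w_true
    w_hat = primal_dual_l1(X, Y, eta=3.0, n_iter=2000)
    print("support found:", np.nonzero(np.abs(w_hat) > 1e-3)[0])
```

Note that, because neither the ℓ1 loss nor the ℓ1 constraint is smooth, such a splitting scheme only needs the two proximal maps and the linear operator X, which is what allows the method to avoid any gradient-Lipschitz assumption.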
Domains
Machine Learning [cs.LG]

Origin: Files produced by the author(s)