Compressed Least-Squares Regression
Abstract
We consider the problem of learning, from K input data points, a regression function in a function space of high dimension N, using projections onto a random subspace of lower dimension M. From any linear approximation algorithm based on empirical risk minimization (possibly penalized), we derive bounds on the excess risk of the estimate computed in the projected subspace (the compressed domain) in terms of the excess risk of the estimate built in the high-dimensional space (the initial domain). We apply this analysis to ordinary least-squares regression and show that, by choosing M=O(\sqrt{K}), the estimation error (for the quadratic loss) of the resulting ``Compressed Least-Squares Regression'' is O(1/\sqrt{K}) up to logarithmic factors. We also discuss the numerical complexity of several algorithms (in both the initial and compressed domains) as a function of N, K, and M.
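The procedure the abstract describes, projecting the K-by-N feature matrix onto a random M-dimensional subspace and then running ordinary least squares there, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact construction: the function name `compressed_least_squares`, the Gaussian projection with 1/\sqrt{M} scaling, and the synthetic data are choices made for the example.

```python
import numpy as np

def compressed_least_squares(Phi, y, M, rng=None):
    """Ordinary least squares after a random projection of the features.

    Phi : (K, N) feature matrix (K samples, N high-dimensional features)
    y   : (K,) target vector
    M   : dimension of the random subspace (e.g. M on the order of sqrt(K))

    Returns the (N,) coefficient vector lifted back to the initial domain.
    """
    rng = np.random.default_rng(rng)
    K, N = Phi.shape
    # Random projection matrix; i.i.d. Gaussian entries scaled by 1/sqrt(M)
    # are one standard choice (the paper's exact construction may differ).
    A = rng.normal(scale=1.0 / np.sqrt(M), size=(N, M))
    Phi_c = Phi @ A                       # compressed (K, M) design matrix
    # Ordinary least squares in the compressed domain
    beta_c, *_ = np.linalg.lstsq(Phi_c, y, rcond=None)
    return A @ beta_c                     # predictor in the initial domain

# Usage on synthetic data: K samples, N features, M ~ sqrt(K)
K, N = 1000, 5000
rng = np.random.default_rng(0)
Phi = rng.normal(size=(K, N))
w_true = np.zeros(N)
w_true[:10] = 1.0
y = Phi @ w_true + 0.1 * rng.normal(size=K)
M = int(np.sqrt(K))
w_hat = compressed_least_squares(Phi, y, M, rng=1)
```

Solving the M-dimensional least-squares problem costs O(KM^2 + M^3) rather than the O(KN^2 + N^3) of the initial domain, which is the complexity trade-off the abstract alludes to when N is large.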
Domains
Machine Learning [cs.LG]

Origin: Files produced by the author(s)