Some Rates of Convergence for the Selected Lasso Estimator
Abstract
We consider the estimation of a function over an ordered, finite or infinite, dictionary. We focus on the selected Lasso estimator introduced by Massart and Meynet (2011) as an adaptation of the Lasso designed to handle infinite dictionaries. Using the oracle inequality established by Massart and Meynet (2011), we derive rates of convergence of this estimator over a wide range of function classes described by interpolation spaces, as in Barron et al. (2008). These results highlight that the selected Lasso estimator is adaptive to the smoothness of the function to be estimated, unlike the classical Lasso or the greedy algorithm considered by Barron et al. (2008). Moreover, we prove that the rates of convergence of this estimator are optimal in the orthonormal case.
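For orientation, here is a minimal sketch of the criterion underlying the estimators discussed above, assuming the standard least-squares Lasso formulation; the symbols $y$, $\phi_j$, $\lambda$, $m$ and $\mathrm{pen}(m)$ are illustrative notation not taken from the abstract, and the description of the selected variant is only a rough sketch of the idea, not the authors' exact construction.

% Classical Lasso over a dictionary (\phi_j)_j: \ell_1-penalized least squares.
\[
  \hat{f}_{\lambda} \;\in\; \operatorname*{arg\,min}_{\theta}
  \Big\{ \Big\| y - \sum_{j \ge 1} \theta_j \phi_j \Big\|^2
         + \lambda \sum_{j \ge 1} |\theta_j| \Big\}.
\]
% Selected Lasso (rough sketch of the idea): compute Lasso estimators
% \hat{f}_{\lambda,m} on the truncated sub-dictionaries \{\phi_1, \dots, \phi_m\}
% and choose the truncation level \hat{m} by minimizing a penalized criterion,
% e.g. \|y - \hat{f}_{\lambda,m}\|^2 + \mathrm{pen}(m); this selection step is
% what yields adaptivity to the unknown smoothness of the target function.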