Average performance of the sparsest approximation in a dictionary
Abstract
Given data d ∈ ℝ^N, we consider its representation u* involving the least number of non-zero elements (denoted by ℓ0(u*)) in a dictionary A (represented by a matrix), under the constraint ‖Au − d‖ ≤ τ, for τ > 0 and a norm ‖·‖. This (nonconvex) optimization problem yields the sparsest approximation of d. We assume that the data d are uniformly distributed in θB_{f_d}(1), where θ > 0 and B_{f_d}(1) is the unit ball of a norm f_d. Our main result estimates the probability that the data d give rise to a K-sparse solution u*: we prove that P(ℓ0(u*) ≤ K) = C_K (τ/θ)^{N−K} + o((τ/θ)^{N−K}), where u* is the sparsest approximation of the data d and C_K > 0. The constants C_K are an explicit function of ‖·‖, A, f_d and K, which allows us to analyze the role of these parameters in obtaining a K-sparse approximation. Consequently, given f_d and θ, we have a tool to design A and ‖·‖ so that C_K (and hence P(ℓ0(u*) ≤ K)) is as large as possible for small K. To obtain the above estimate, we give a precise characterization of the set σ_K^τ of all data leading to a K-sparse result. The main difficulty is to estimate accurately the Lebesgue measure of the sets σ_K^τ ∩ B_{f_d}(θ). We sketch a comparative analysis between our Average Performance in Approximation (APA) methodology and the well-known Nonlinear Approximation (NA), which also assesses performance in approximation.
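As a rough, hypothetical illustration of the quantity being estimated (not the paper's method), the sketch below approximates P(ℓ0(u*) ≤ K) by Monte Carlo for a small random dictionary, taking both ‖·‖ and f_d to be the Euclidean norm and solving the ℓ0 problem by brute force over supports. The sizes N and M, the values of τ, θ and K, and the sampling scheme are all assumptions made for this example.

```python
import itertools
import numpy as np

# Hypothetical toy setup (not from the paper): small random dictionary,
# Euclidean norm for both ||.|| and f_d, data uniform in the ball of radius theta.
rng = np.random.default_rng(0)
N, M = 3, 5                  # ambient dimension and number of atoms (assumed)
K, tau, theta = 1, 0.25, 1.0 # sparsity level, tolerance, data radius (assumed)
A = rng.standard_normal((N, M))

def sparsest_size(d):
    """Smallest k such that some u with k non-zero entries satisfies ||Au - d|| <= tau.
    Brute force over supports; only feasible for tiny dictionaries."""
    for k in range(M + 1):
        for S in itertools.combinations(range(M), k):
            if k == 0:
                residual = d
            else:
                AS = A[:, list(S)]
                u, *_ = np.linalg.lstsq(AS, d, rcond=None)
                residual = d - AS @ u
            if np.linalg.norm(residual) <= tau:
                return k
    return M

def uniform_in_ball(radius):
    """Draw a point uniformly in the Euclidean ball of the given radius in R^N."""
    x = rng.standard_normal(N)
    r = radius * rng.random() ** (1.0 / N)
    return r * x / np.linalg.norm(x)

trials = 5000
hits = sum(sparsest_size(uniform_in_ball(theta)) <= K for _ in range(trials))
print(f"Monte Carlo estimate of P(l0(u*) <= {K}): {hits / trials:.4f}")
```

For small τ/θ such an empirical frequency behaves like C_K (τ/θ)^{N−K}, which is the regime the constants C_K in the abstract quantify.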