Some recovery conditions for basis learning by L1-minimization
Abstract
Many recent works have shown that if a given signal admits a sufficiently sparse representation in a given dictionary, then this representation is recovered by several standard optimization algorithms, in particular the convex L1-minimization approach. Here we investigate the related problem of inferring the dictionary from training data, with an approach where L1-minimization is used as a criterion to select a dictionary. We restrict our analysis to basis learning and identify necessary / sufficient / necessary and sufficient conditions on the ideal (not necessarily very sparse) coefficients of the training data in an ideal basis to guarantee that the ideal basis is a strict local optimum of the L1-minimization criterion among (not necessarily orthogonal) bases of normalized vectors. We illustrate these conditions on deterministic as well as toy random models in dimension two and highlight the main challenges left open by these preliminary theoretical results.
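For readers unfamiliar with the L1-minimization (basis pursuit) criterion the abstract refers to, a minimal sketch follows. It is an illustration only, not the paper's method: it assumes NumPy/SciPy, and the helper name `basis_pursuit` and the toy data are hypothetical. It solves min ||x||_1 subject to Dx = y by recasting the problem as a linear program via the standard split x = u - v with u, v >= 0.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, y):
    """Solve min ||x||_1 subject to D x = y as a linear program.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v).
    (Illustrative sketch; not the algorithm analyzed in the paper.)
    """
    n, m = D.shape
    c = np.ones(2 * m)             # objective: sum(u) + sum(v)
    A_eq = np.hstack([D, -D])      # constraint: D u - D v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:m], res.x[m:]
    return u - v

# Toy example in dimension two: attempt to recover a 1-sparse
# coefficient vector in a normalized (non-orthogonal) dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((2, 3))
D /= np.linalg.norm(D, axis=0)     # normalize the dictionary atoms
x_true = np.array([0.0, 1.5, 0.0])
y = D @ x_true
print(np.round(basis_pursuit(D, y), 6))
```

Whether the L1 minimizer coincides with the sparse coefficients depends on recovery conditions of the kind the paper studies; the paper's contribution concerns the further question of when the basis itself is a local optimum of this criterion over the training data.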
Main file: 2008_ISCCSP_GribonvalSchnass_Recovery_submitted.pdf (340.11 KB)
Origin: Files produced by the author(s)