Stability of Unfolded Forward-Backward to Perturbations in Observed Data
Abstract
We consider a neural network architecture for solving inverse problems, built by unfolding a forward-backward algorithm. This algorithm is based on the minimization of an objective function corresponding to a penalized least squares problem. In this context, ensuring stability is consistent with inverse problem theory, since it guarantees both the continuity of the inversion method and its insensitivity to small noise. The latter is a critical property, as deep neural networks have been shown to be vulnerable to adversarial perturbations. The main novelty of our work is to analyze the robustness of this inversion method with respect to a perturbation of the bias parameter of the network. In our architecture, the bias accounts for the observed data in the inverse problem. The analysis is carried out using tools from fixed point theory. Our theoretical results are illustrated by numerical simulations on a signal restoration problem.
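To make the construction concrete, the sketch below unfolds a few forward-backward iterations for a penalized least squares problem. It is not the authors' exact architecture: it assumes an l1 penalty (so the proximal step is soft-thresholding), a fixed step size, and shared weights across layers; all function names and parameter values are illustrative.

```python
# Minimal sketch of unfolding K forward-backward iterations for
#   minimize_x  0.5 * ||H x - y||^2 + lam * ||x||_1,
# assuming an l1 penalty, so the proximal step is soft-thresholding.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unfolded_forward_backward(H, y, lam=0.1, n_layers=10):
    """Run n_layers unfolded forward-backward steps.

    Each 'layer' performs one gradient step on the data-fidelity term
    followed by the proximal step on the penalty. The observed data y
    enters every layer only through the bias term gamma * H.T @ y.
    """
    gamma = 1.0 / np.linalg.norm(H, 2) ** 2   # step size <= 1/L, with L = ||H||_2^2
    W = np.eye(H.shape[1]) - gamma * H.T @ H  # linear weight shared by all layers
    b = gamma * H.T @ y                       # bias carrying the observation
    x = np.zeros(H.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W @ x + b, gamma * lam)  # one forward-backward update
    return x

# Hypothetical usage: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
H = rng.standard_normal((64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[rng.choice(128, 8, replace=False)] = 1.0
y = H @ x_true + 0.01 * rng.standard_normal(64)
x_hat = unfolded_forward_backward(H, y, lam=0.05, n_layers=50)
```

In this simplified view, a perturbation of the observed data y only changes the bias b fed to each layer, which is the sense in which the bias accounts for the observation and in which robustness to bias perturbations is studied.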