Conference paper, 2019

Approximation spaces of deep neural networks

Abstract

We study the expressivity of sparsely connected deep networks. Measuring a network's complexity by its number of connections or its number of neurons, we consider the class of functions whose error of best approximation with networks of a given complexity decays at a certain rate. Using classical approximation theory, we show that this class can be endowed with a norm that turns it into a well-behaved function space, called an approximation space. We establish that the presence of certain skip connections has no impact on the approximation space, and discuss the role of the network's nonlinearity (also known as its activation function) on the resulting spaces, as well as the benefits of depth. For the popular ReLU nonlinearity (as well as its powers), we relate the newly identified spaces to classical Besov spaces, which have a long history associated with sparse wavelet decompositions. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, provided these networks are sufficiently deep.
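For reference, the construction alluded to above follows the classical definition of approximation spaces from approximation theory (in the spirit of DeVore and Lorentz). The LaTeX sketch below states that definition; taking the approximation family Σ_n to be the functions realized by networks of complexity at most n, and the choice of complexity measure (connections or neurons), are assumptions inferred from the abstract, not quoted from the paper.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch: classical approximation-space definition, as in DeVore--Lorentz.
% Assumption (inferred from the abstract, not quoted from the paper):
% $\Sigma_n$ is the set of functions realized by networks with at most
% $n$ connections (or $n$ neurons), $\Sigma_0 = \{0\}$, and $X$ is the
% ambient (quasi-)normed function space.
Error of best approximation at complexity $n$:
\[
  E_n(f) := \inf_{g \in \Sigma_n} \lVert f - g \rVert_X .
\]
For a rate $\alpha > 0$ and fine index $0 < q \le \infty$, the approximation
class collects all $f$ whose errors decay like $n^{-\alpha}$:
\[
  A^\alpha_q(X) := \{\, f \in X : \lVert f \rVert_{A^\alpha_q} < \infty \,\},
\]
\[
  \lVert f \rVert_{A^\alpha_q}
  := \Bigl( \sum_{n \ge 1} \bigl[ n^{\alpha} E_{n-1}(f) \bigr]^{q}\, n^{-1} \Bigr)^{1/q}
  \quad (q < \infty),
  \qquad
  \lVert f \rVert_{A^\alpha_\infty} := \sup_{n \ge 1} n^{\alpha} E_{n-1}(f).
\]
\end{document}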
Main file
Abstract-SMAI2019.pdf (75.45 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02127179, version 1 (13-05-2019)

Identifiers

  • HAL Id: hal-02127179, version 1

Cite

Rémi Gribonval, Gitta Kutyniok, Morten Nielsen, Felix Voigtlaender. Approximation spaces of deep neural networks. SMAI 2019 - 9ème Biennale des Mathématiques Appliquées et Industrielles, May 2019, Guidel, France. pp.1. ⟨hal-02127179⟩