%0 Unpublished work
%T On the symmetries in the dynamics of wide two-layer neural networks
%+ Laboratoire de Mathématiques d'Orsay (LMO)
%+ Statistique mathématique et apprentissage (CELESTE)
%+ Ecole Polytechnique Fédérale de Lausanne (EPFL)
%A Hajjar, Karl
%A Chizat, Lenaic
%8 2022-10-30
%D 2022
%Z 2211.08771
%K Neural Networks
%K Infinite-width limit
%K Gradient Methods
%Z Computer Science [cs]/Machine Learning [cs.LG]
%Z Statistics [stat]/Machine Learning [stat.ML]
%X We consider the idealized setting of gradient flow on the population risk for infinitely wide two-layer ReLU neural networks (without bias), and study the effect of symmetries on the learned parameters and predictors. We first describe a general class of symmetries which, when satisfied by the target function $f^*$ and the input distribution, are preserved by the dynamics. We then study more specific cases. When $f^*$ is odd, we show that the dynamics of the predictor reduces to that of a (non-linearly parameterized) linear predictor, and its exponential convergence can be guaranteed. When $f^*$ has a low-dimensional structure, we prove that the gradient flow PDE reduces to a lower-dimensional PDE. Furthermore, we present informal and numerical arguments that suggest that the input neurons align with the lower-dimensional structure of the problem.
%G English
%2 https://hal.science/hal-03829400v2/document
%2 https://hal.science/hal-03829400v2/file/learning_features.pdf
%L hal-03829400
%U https://hal.science/hal-03829400