An optimal control framework for adaptive neural ODEs
Abstract
In recent years, the notion of neural ODEs has connected deep learning with the fields of ODEs and optimal control. In this setting, a neural network is defined as the solution of a given ODE, computed with a chosen time discretization, and the learning task consists of finding the ODE parameters as the minimizers of a sampled loss. In the limit of infinitely many time steps and data samples, one obtains a fully continuous formulation of the problem; any practical implementation therefore incurs two discretization errors: a sampling error and a time-discretization error. In this work, we develop a general optimal control framework to analyse the interplay between these two errors. We prove that approximating the solution of the fully continuous problem to a given accuracy requires not only a minimal number of training samples but also solving the control problem for the sampled loss to a corresponding minimal accuracy. The theoretical analysis allows us to develop rigorous adaptive schemes in both time and sampling, giving rise to a notion of adaptive neural ODEs. The performance of the approach is illustrated in several numerical examples.
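For concreteness, the continuous problem and its sampled, time-discretized counterpart can be sketched as follows; the notation (loss $\ell$, dynamics $f$, control $\theta$, data distribution $\mu$, explicit Euler scheme) is generic, since the abstract does not fix it:
\[
\min_{\theta}\ \mathbb{E}_{(x_0,y)\sim\mu}\big[\ell(x(T),y)\big]
\quad\text{s.t.}\quad \dot{x}(t) = f(x(t),\theta(t)),\quad x(0)=x_0.
\]
A practical implementation replaces the expectation by an empirical mean over $N$ samples and the ODE by, e.g., an explicit Euler scheme with $M$ steps:
\[
\min_{\theta}\ \frac{1}{N}\sum_{i=1}^{N}\ell\big(x_i^{M},y_i\big)
\quad\text{s.t.}\quad x_i^{m+1} = x_i^{m} + \Delta t\, f\big(x_i^{m},\theta^{m}\big),\quad x_i^{0}=x_{0,i},
\]
so that the sampling error is governed by $N$ and the time-discretization error by $\Delta t = T/M$.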