Interpretable reduced order modeling using neural autoencoders
Abstract
Solving simulation problems such as turbulent flows for industrial applications requires the discretization of partial differential equations on extremely high-dimensional meshes. These high-dimensional discretizations yield complex systems of equations that often require access to supercomputers to be solved, which severely limits the use of numerical simulation methods in an industrial context. At the same time, most systems evolve on relatively low-dimensional manifolds, meaning that the high-dimensional representations used by methods such as finite elements are not optimal for the simulation of physical systems. This suggests that computational gains can be achieved by identifying better representation spaces for the solutions of simulation problems.
This issue is directly related to the field of dimensionality reduction, which provides methods to identify the low-dimensional manifolds on which most datasets lie in order to simplify their analysis. Using linear dimensionality reduction methods such as Proper Orthogonal Decomposition (POD), reduced order models have been constructed that summarise the state of dynamical systems using a small number of features. These methods rely on data to construct a basis of linear modes that optimally represents the system at hand. They have, for example, been used in combination with regression methods to construct fully data-driven reduced order models. Linear reduction methods also have the advantage of being interpretable and can be combined with existing physical equations to construct hybrid models that mix equations derived from first principles with data-driven components.
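As a hedged illustration (not the implementation used in this work), POD modes can be obtained from a snapshot matrix with a truncated singular value decomposition; the snapshot data, dimensions, and truncation rank below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one high-dimensional state
# of the system at a given time (n_dof spatial degrees of freedom, n_t snapshots).
n_dof, n_t = 10_000, 200
snapshots = np.random.rand(n_dof, n_t)  # placeholder for simulation data

# Center the snapshots and compute the POD modes via a thin SVD.
mean_state = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_state, full_matrices=False)

# Keep the first r modes: a linear basis that optimally captures the
# snapshot energy in the least-squares sense.
r = 10
modes = U[:, :r]                              # (n_dof, r) reduced basis
coeffs = modes.T @ (snapshots - mean_state)   # (r, n_t) reduced coordinates

# Reconstruction of the full state from the reduced representation.
reconstruction = mean_state + modes @ coeffs
```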
Unfortunately, these linear reduction methods have also been shown to be limited for the reduction of dynamical systems (\cite{led}). Indeed, while the intrinsic dimension of most systems is relatively low, the manifolds on which they evolve can be strongly nonlinear, which means that these low-dimensional manifolds are not accurately approximated by linear subspaces, as linear reduction methods assume. To address this issue, a wide range of nonlinear reduction methods have been applied to dynamical systems. Most notably, neural autoencoders have shown impressive performance for the reduction (and reconstruction) of a system's state.
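For contrast, a minimal fully connected autoencoder in PyTorch (a sketch under assumed layer sizes, not the architecture used in this work) replaces the fixed linear basis with learned nonlinear encode and decode maps.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal fully connected autoencoder mapping a discretized state
    of dimension n_dof to a latent vector of dimension r and back."""

    def __init__(self, n_dof: int, r: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_dof, hidden), nn.ELU(),
            nn.Linear(hidden, r),
        )
        self.decoder = nn.Sequential(
            nn.Linear(r, hidden), nn.ELU(),
            nn.Linear(hidden, n_dof),
        )

    def forward(self, x):
        z = self.encoder(x)        # nonlinear reduction to the latent manifold
        return self.decoder(z), z  # reconstruction and latent code

# Hypothetical dimensions and data, for illustration only.
model = Autoencoder(n_dof=10_000, r=10)
x = torch.randn(32, 10_000)              # placeholder batch of states
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction objective
```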
At the same time, the use of nonlinear reduction methods limits the possibility of constructing theoretically grounded and interpretable models that leverage known physical models. As a result, they often rely on fully data-driven dynamical models, such as recurrent neural networks, to approximate the dynamics of the chosen reduced representation. To address this issue, we propose an approach to learn an interpretable and theoretically grounded dynamical model that evolves the latent representation of a dynamical system in time. The dynamics are learned from data as a time-continuous model built around a linear term and completed by a non-Markovian nonlinear closure.
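As an illustration of this structure (a sketch of one plausible parameterization, not necessarily the exact form used in this work), the latent trajectory $z(t)$ produced by the encoder could be evolved by
\[
\dot{z}(t) = A\,z(t) + f_\theta\big(z(t), h(t)\big),
\]
where $A$ is a learned linear operator, $f_\theta$ a learned nonlinear closure, and $h(t)$ an auxiliary memory variable summarizing the past trajectory $\{z(s)\}_{s \le t}$, which is what makes the closure non-Markovian.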
We show in our work that the form of our model can be justified using the theory of partially observed systems through the Mori-Zwanzig formalism. Moreover, we show that for certain systems, the framework is able to transform a high-dimensional partial differential equation into a quasi-linear ordinary differential equation, yielding a low-dimensional system that can be simulated efficiently and interpreted easily.
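For reference, in its linear (Mori) form the Mori-Zwanzig formalism expresses the dynamics of a resolved variable $z$ as a generalized Langevin equation,
\[
\frac{\mathrm{d}}{\mathrm{d}t} z(t) = \Omega\, z(t) + \int_0^t K(t-s)\, z(s)\,\mathrm{d}s + F(t),
\]
combining a Markovian linear term, a memory integral over the past of the resolved variable, and an orthogonal term $F(t)$ driven by the unresolved degrees of freedom. The linear-term-plus-non-Markovian-closure structure of the learned latent model mirrors this decomposition.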