Multi-fidelity modeling using DGPs: Improvements and a generalization to varying input space dimensions
Abstract
Multi-fidelity approaches improve the inference of a high-fidelity model, constructed from a small set of accurate observations, by taking advantage of its correlations with a low-fidelity model built from a larger set of approximate data. Most existing multi-fidelity methods assume that the inputs of the low- and high-fidelity models are defined identically over the same input space. However, the low-fidelity model variables may be defined over a different space than those of the high-fidelity model because of different modeling approaches, i.e. input spaces of different dimensionality and variables of a different nature. Recently, Deep Gaussian Processes have been used to capture the correlations between the low- and high-fidelity models. In this paper, Deep Gaussian Processes for multi-fidelity (MF-DGP) are extended to the case where the input spaces of the low- and high-fidelity models are different. Moreover, the learning capacity of MF-DGP is improved by proposing an optimization approach for the inducing inputs and by using natural gradients for the variational distributions of the inducing variables, which also reduces the training time.
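The natural-gradient/Adam split mentioned above (natural gradients for the variational distribution of the inducing variables, a standard gradient optimizer for the hyperparameters and inducing inputs) can be illustrated with a minimal sketch. The sketch below uses GPflow's single-fidelity sparse variational GP (SVGP) as a stand-in for the paper's MF-DGP; the data, step counts, and learning rates are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf
import gpflow
from gpflow.optimizers import NaturalGradient

# Toy single-fidelity data standing in for one fidelity level (assumption).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (100, 1))
Y = np.sin(10.0 * X) + 0.1 * rng.standard_normal((100, 1))

# Sparse variational GP with 20 inducing inputs, which remain free to be optimized.
Z = np.linspace(0.0, 1.0, 20)[:, None]
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.RBF(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
)

# Keep the variational parameters out of the gradient-descent step:
# they are updated by natural gradients instead.
gpflow.set_trainable(model.q_mu, False)
gpflow.set_trainable(model.q_sqrt, False)

natgrad = NaturalGradient(gamma=0.1)
adam = tf.keras.optimizers.Adam(learning_rate=0.01)
loss = model.training_loss_closure((X, Y))

for step in range(500):
    # Natural-gradient update of the variational distribution q(u).
    natgrad.minimize(loss, [(model.q_mu, model.q_sqrt)])
    # Standard gradient update of the hyperparameters and inducing inputs.
    with tf.GradientTape() as tape:
        objective = loss()
    grads = tape.gradient(objective, model.trainable_variables)
    adam.apply_gradients(zip(grads, model.trainable_variables))
```

Alternating the two updates in this way lets the variational distribution follow the information geometry of the model while the remaining parameters are handled by a conventional optimizer, which is the design choice behind the reported reduction in training time.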