Conference Papers, Year: 2020

Adaptive multi-fidelity optimization with fast learning rates

Côme Fiegel, Victor Gabillon, Michal Valko

Abstract

In multi-fidelity optimization, we have access to biased approximations of the target function at varying costs. In this work, we study the setting of optimizing a locally smooth function with a limited budget Λ, where the learner has to trade off the cost and the bias of these approximations. We first prove lower bounds on the simple regret under different assumptions on the fidelities, based on a cost-to-bias function. We then present the Kometo algorithm, which achieves the same rates up to additional logarithmic factors, without any knowledge of the function smoothness or of the fidelity assumptions, improving on prior results. Finally, we empirically show that our algorithm outperforms prior multi-fidelity optimization methods without knowledge of problem-dependent parameters.
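
The abstract describes evaluations of the target function at several fidelities, each with a cost and a bias bounded through a cost-to-bias function, and a total budget Λ spent across evaluations. The following is a minimal illustrative Python sketch of that setting only; it is not the Kometo algorithm from the paper, and the target function, the uniform-grid baseline, and the specific cost-to-bias function ζ(c) = 1/√c are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative multi-fidelity setup (not the paper's Kometo algorithm).
# Target function on [0, 1]; its maximizer is what we want to approximate.
def f(x):
    return 1.0 - (x - 0.37) ** 2

# Assumed cost-to-bias function: spending cost c yields an approximation
# whose bias is bounded by zeta(c) = 1 / sqrt(c) (a made-up example).
def zeta(cost):
    return 1.0 / np.sqrt(cost)

def evaluate(x, cost, rng):
    """Biased approximation of f(x); the bias magnitude is at most zeta(cost)."""
    bias = rng.uniform(-zeta(cost), zeta(cost))
    return f(x) + bias

def naive_multifidelity_search(budget, cost_per_eval, rng):
    """Spend the whole budget on a uniform grid at a single fixed fidelity.

    A real method would adapt both the query points and the per-query cost;
    this baseline only exposes the trade-off: a larger cost_per_eval means
    fewer queries, but each query is less biased.
    """
    n_evals = int(budget // cost_per_eval)
    xs = np.linspace(0.0, 1.0, n_evals)
    values = [evaluate(x, cost_per_eval, rng) for x in xs]
    return xs[int(np.argmax(values))]

rng = np.random.default_rng(0)
budget = 100.0  # total budget Lambda
for cost in [0.5, 2.0, 10.0]:
    x_hat = naive_multifidelity_search(budget, cost, rng)
    simple_regret = f(0.37) - f(x_hat)
    print(f"cost per eval {cost:5.1f} -> simple regret {simple_regret:.4f}")
```

Cheap queries cover the domain densely but each value is unreliable, while expensive queries are nearly unbiased but exhaust the budget after only a few points. The paper studies how well this trade-off can be resolved in terms of simple regret when neither the smoothness of the function nor the fidelity assumptions are known to the learner.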
Main file: fiegel2020adaptive.pdf (1.31 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03288879, version 1 (16-07-2021)

Identifiers

  • HAL Id: hal-03288879, version 1

Cite

Côme Fiegel, Victor Gabillon, Michal Valko. Adaptive multi-fidelity optimization with fast learning rates. International Conference on Artificial Intelligence and Statistics, 2020, Palermo, Italy. ⟨hal-03288879⟩