Log-log Convergence for Noisy Optimization
Abstract
We consider noisy optimization problems, without the assumption of variance vanishing in the neighborhood of the optimum. We show mathematically that simple rules with an exponential number of resamplings lead to a log-log convergence rate: in this case, the log of the distance to the optimum is linear in the log of the number of resamplings. The same rate is obtained with a number of resamplings polynomial in the inverse step-size. We show empirically that this convergence rate is also obtained with a polynomial number of resamplings. In this polynomial resampling setting, using classical evolution strategies and an ad hoc choice of the number of resamplings, we seemingly get the same rates as those obtained with Estimation of Distribution Algorithms specifically designed for the noisy setting. We also experiment with non-adaptive polynomial resamplings. Compared to the state of the art, our results provide (i) proofs of log-log convergence for evolution strategies (which were not covered by existing results) in the case of objective functions with quadratic expectations and constant noise, (ii) log-log rates also for objective functions with expectation E[f(x)] = ||x − x*||^p, where x* represents the optimum, and (iii) experiments with parametrizations different from those considered in the proof. These results suggest some simple reevaluation schemes. This paper extends [1].
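As an illustration of the resampling schemes discussed in the abstract, the following is a minimal sketch, not the authors' exact algorithm: a (1+1) evolution strategy on a noisy sphere function, where the fitness at each point is averaged over r_n re-evaluations and r_n grows exponentially with the iteration index. The test function `noisy_sphere`, the constants `K` and `zeta`, and the one-fifth success rule parametrization are assumptions chosen for illustration; a comment notes the alternative of taking the number of resamplings polynomial in the inverse step-size.

```python
import numpy as np

def noisy_sphere(x, rng):
    """E[f(x)] = ||x||^2, perturbed by constant-variance Gaussian noise."""
    return np.dot(x, x) + rng.normal(0.0, 1.0)

def one_plus_one_es(dim=5, iterations=60, K=10, zeta=1.1, seed=0):
    """(1+1)-ES with exponentially increasing resamplings r_n = K * zeta^n.

    Averaging each fitness over r_n re-evaluations shrinks the comparison
    noise as the search converges; the one-fifth success rule adapts sigma.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)   # parent (optimum of noisy_sphere is 0)
    sigma = 1.0                # step-size
    for n in range(iterations):
        r = int(np.ceil(K * zeta ** n))        # exponential resampling rule
        # alternative: r = int(np.ceil(sigma ** -2))  # polynomial in 1/sigma
        y = x + sigma * rng.normal(size=dim)   # Gaussian mutation
        fx = np.mean([noisy_sphere(x, rng) for _ in range(r)])
        fy = np.mean([noisy_sphere(y, rng) for _ in range(r)])
        if fy <= fx:
            x, sigma = y, sigma * 2.0 ** (1 / 3)   # success: enlarge step
        else:
            sigma *= 2.0 ** (-1 / 12)              # failure: shrink step
        print(n, r, np.linalg.norm(x))  # distance to optimum vs. resamplings
    return x

if __name__ == "__main__":
    one_plus_one_es()
```

Plotting log ||x_n − x*|| against the log of the cumulative number of resamplings should then exhibit the roughly linear (log-log) trend the paper describes.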
Domains
Optimization and Control [math.OC]

Origin: Files produced by the author(s)