Journal article in IEEE Transactions on Evolutionary Computation, 2022

Anytime Performance Assessment in Blackbox Optimization Benchmarking

Abstract

We present concepts and recipes for the anytime performance assessment when benchmarking optimization algorithms in a blackbox scenario. We consider runtime, oftentimes measured in the number of blackbox evaluations needed to reach a target quality, to be a universally measurable cost for solving a problem. Starting from the graph that depicts the solution quality versus runtime, we argue that runtime is the only performance measure with a generic, meaningful, and quantitative interpretation. Hence, our assessment is solely based on runtime measurements. We discuss proper choices for solution quality indicators in single- and multiobjective optimization, as well as in the presence of noise and constraints. We also discuss the choice of the target values, budget-based targets, and the aggregation of runtimes by using simulated restarts, averages, and empirical cumulative distributions, which generalize convergence graphs of single runs. The presented performance assessment is to a large extent implemented in the comparing continuous optimizers (COCO) platform, freely available at https://github.com/numbbo/coco.
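The three core quantities named in the abstract (runtime to reach a quality target, runtimes of simulated restarts, and their empirical cumulative distribution) can be illustrated compactly. The following Python sketch is not COCO's actual API; the helpers runtime_to_target, simulated_restart_runtimes, and ecdf are hypothetical names, written under the assumption of a minimization problem and of runs recorded as best-so-far quality histories.

import random

def runtime_to_target(quality_history, target):
    # quality_history[i]: best-so-far quality after i+1 evaluations
    # (minimization assumed). Returns the first evaluation count at
    # which the target is reached, or None if the run never succeeds.
    for evals, quality in enumerate(quality_history, start=1):
        if quality <= target:
            return evals
    return None

def simulated_restart_runtimes(runtimes, evals_used, n_samples, seed=1):
    # runtimes[i]: evaluations run i needed to reach the target, or None.
    # evals_used[i]: total evaluations consumed by run i.
    # Each sample sums the budgets of uniformly drawn unsuccessful runs
    # until a successful run is drawn, whose runtime ends the chain.
    if all(r is None for r in runtimes):
        raise ValueError("simulated restarts need at least one success")
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        total = 0
        while True:
            i = rng.randrange(len(runtimes))
            if runtimes[i] is not None:
                samples.append(total + runtimes[i])
                break
            total += evals_used[i]  # failed run: pay its budget, restart
    return samples

def ecdf(samples, budgets):
    # Fraction of sampled runtimes that do not exceed each budget.
    return [sum(s <= b for s in samples) / len(samples) for b in budgets]

# Example: two successful runs and one failure on one problem/target.
runs = [120, 300, None]
budgets_used = [120, 300, 1000]
samples = simulated_restart_runtimes(runs, budgets_used, n_samples=1000)
print(ecdf(samples, budgets=[100, 500, 1000, 5000]))

In the assessment described by the paper, such distributions are typically aggregated over many target values and problems rather than computed for a single target as in this toy example.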
Main file
coco-perf-assess-paper.pdf (1.05 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03814997, version 1 (14-10-2022)
hal-03814997, version 2 (31-01-2023)

Identifiers

HAL Id: hal-03814997
DOI: 10.1109/TEVC.2022.3210897

Cite

Nikolaus Hansen, Anne Auger, Dimo Brockhoff, Tea Tušar. Anytime Performance Assessment in Blackbox Optimization Benchmarking. IEEE Transactions on Evolutionary Computation, 2022, 26 (6), pp. 1293-1305. ⟨10.1109/TEVC.2022.3210897⟩. ⟨hal-03814997v2⟩