Conference paper, Year: 2016

Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed

Abstract

The Comparing Continuous Optimizers platform COCO has become a standard for the effortless benchmarking of numerical (single-objective) optimization algorithms. In 2016, COCO was extended towards multi-objective optimization by providing a first bi-objective test suite. To provide a baseline, we benchmark a pure random search on this bi-objective bbob-biobj test suite of the COCO platform. For each combination of function, dimension $n$, and instance of the test suite, $10^6 \cdot n$ candidate solutions are sampled uniformly within the sampling box $[-5, 5]^n$.
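For illustration, below is a minimal sketch of such a pure random search experiment, assuming the cocoex Python module of the COCO platform is installed; the exact API calls, the random seed, and the omission of a COCO observer are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
import cocoex  # COCO experimentation module (assumed installed)

# Pure random search baseline: for every problem of the bbob-biobj suite,
# sample 10^6 * n points uniformly in the box [-5, 5]^n and evaluate them.
suite = cocoex.Suite("bbob-biobj", "", "")  # all functions, dimensions, instances
rng = np.random.default_rng(12345)          # hypothetical seed, for reproducibility

for problem in suite:
    n = problem.dimension
    budget = 10**6 * n                      # evaluation budget used in the paper
    for _ in range(budget):                 # note: the full budget is very large
        x = rng.uniform(-5.0, 5.0, size=n)  # uniform sample in the sampling box
        problem(x)                          # evaluate both objectives
    problem.free()
```

In an actual COCO experiment, a cocoex.Observer would additionally be attached to each problem so that the evaluations are logged for post-processing; this is omitted here to keep the sketch short.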

Dates and versions

hal-01435455, version 1 (14-01-2017)


Cite

Anne Auger, Dimo Brockhoff, Nikolaus Hansen, Dejan Tušar, Tea Tušar, et al.. Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed. GECCO 2016 - Genetic and Evolutionary Computation Conference, Jul 2016, Denver, CO, United States. pp.1217-1223, ⟨10.1145/2908961.2931704⟩. ⟨hal-01435455⟩