Benchmarking large-scale continuous optimizers: the bbob-largescale testbed, a COCO software guide and beyond
Abstract
Benchmarking optimization solvers is an important and necessary task for performance assessment that, in turn, can help improve the design of algorithms. Although repetitive and tedious, this task has been largely automated over the past ten years with the development of the Comparing Continuous Optimizers (COCO) platform. In this context, this paper presents a new testbed, called bbob-largescale, that contains functions in dimensions ranging from 20 to 640, compatible with and extending the well-known single-objective noiseless bbob test suite to larger dimensions. The test suite contains 24 single-objective functions in continuous domain, built to model well-known difficulties in continuous optimization and to test the scaling behavior of algorithms. To reduce the computational demand of the orthogonal search-space transformations that appear in the bbob test suite, while retaining their desired properties, we use permuted block-diagonal orthogonal matrices. The paper discusses implementation technicalities and presents a guide for using the test suite within the COCO platform and for interpreting the postprocessed output. The source code of the new test suite is available on GitHub as part of the open-source COCO benchmarking platform.
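As a concrete illustration of this construction, the sketch below builds a matrix of the form R = P1 B P2 in NumPy, where B is block-diagonal with orthogonal blocks and P1, P2 are random permutation matrices. This is a minimal sketch, not the testbed's actual implementation: the block size, the Gaussian/QR sampling of the blocks, and the helper names are assumptions made for illustration.

```python
# Minimal sketch (not the official COCO implementation): build a
# permuted block-diagonal orthogonal matrix R = P1 @ B @ P2.
# Block size and the Gaussian/QR sampling are illustrative choices.
import numpy as np

def random_orthogonal(k, rng):
    """Random k x k orthogonal matrix from the QR factorization of a
    standard Gaussian matrix (column signs fixed for uniformity)."""
    q, r = np.linalg.qr(rng.standard_normal((k, k)))
    return q * np.sign(np.diag(r))

def permuted_block_diagonal(dim, block_size, seed=42):
    """Return R = P1 @ B @ P2 with B block-diagonal orthogonal.
    Applied block-wise, R costs O(dim * block_size) per matrix-vector
    product instead of O(dim**2), which is the point of the design."""
    rng = np.random.default_rng(seed)
    sizes = [block_size] * (dim // block_size)
    if dim % block_size:
        sizes.append(dim % block_size)
    # Assemble the block-diagonal core B.
    B = np.zeros((dim, dim))
    start = 0
    for k in sizes:
        B[start:start + k, start:start + k] = random_orthogonal(k, rng)
        start += k
    # Random row and column permutations hide the block structure.
    P1 = np.eye(dim)[rng.permutation(dim)]
    P2 = np.eye(dim)[rng.permutation(dim)]
    return P1 @ B @ P2

R = permuted_block_diagonal(dim=640, block_size=40)
assert np.allclose(R @ R.T, np.eye(640))  # R is orthogonal
```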
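Similarly, the following sketch shows how a solver might be run on the new suite through COCO's Python interface, cocoex. The Suite and Observer calls follow the platform's documented API; the observer name, the option strings, and the placeholder Nelder-Mead solver are assumptions made for this example.

```python
# Sketch of benchmarking a solver on bbob-largescale via COCO's Python
# module cocoex; option strings below are illustrative assumptions.
import cocoex
import scipy.optimize

suite = cocoex.Suite("bbob-largescale", "", "")  # all problems, default options
# Assumption: the standard single-objective "bbob" observer logs this suite.
observer = cocoex.Observer("bbob", "result_folder: my-solver-on-largescale")

for problem in suite:
    problem.observe_with(observer)  # log evaluations for postprocessing
    x0 = problem.initial_solution
    # Any budget-limited solver can go here; Nelder-Mead is a placeholder.
    scipy.optimize.minimize(problem, x0, method="Nelder-Mead",
                            options={"maxfev": 100 * problem.dimension})
    problem.free()
```

The logged data, typically written under exdata/, can then be fed to COCO's postprocessing module cocopp (e.g., `python -m cocopp exdata/my-solver-on-largescale`) to produce the figures and tables the guide describes.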