Comparing the (1+1)-CMA-ES with a Mirrored (1+2)-CMA-ES with Sequential Selection on the Noiseless BBOB-2010 Testbed
Abstract
In this paper, we compare the (1+1)-CMA-ES to the (1+2$_m^s$)-CMA-ES, a recently introduced quasi-random (1+2)-CMA-ES that uses mirroring as a derandomization technique together with sequential selection. Both algorithms were tested using independent restarts until a total budget of $10^{4} D$ function evaluations was reached, where $D$ is the dimension of the search space. On the non-separable ellipsoid function in dimensions 10, 20, and 40, the (1+2$_m^s$)-CMA-ES outperforms the best algorithm tested during BBOB-2009 by 17% (for target values of $10^{-5}$ and $10^{-7}$). Moreover, the comparison shows that the (1+2$_m^s$)-CMA-ES improves on the (1+1)-CMA-ES by about 20% on the ellipsoid, discus, and sum of different powers functions, and by 12% on the sphere function. Finally, we never observe a statistically significant result where the (1+2$_m^s$)-CMA-ES is worse than the (1+1)-CMA-ES.
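For illustration, the Python sketch below shows the core idea behind the two ingredients named in the title: mirrored sampling (the second offspring is the exact mirror image $x - \sigma z$ of the first offspring $x + \sigma z$) and sequential selection (offspring are evaluated one after the other, and the second evaluation is skipped as soon as the first already improves on the parent). This is a minimal sketch under simplifying assumptions: the function name is ours, sampling is isotropic, and the step-size and covariance-matrix adaptation of the actual (1+2$_m^s$)-CMA-ES is omitted.

```python
import numpy as np

def mirrored_sequential_step(f, x, f_x, sigma, rng):
    """One iteration of a (1+2) scheme with mirrored sampling and
    sequential selection (illustrative sketch only; the real
    (1+2^s_m)-CMA-ES also adapts sigma and the covariance matrix).

    f     : objective function to minimize
    x     : current parent (1-D numpy array)
    f_x   : objective value at the parent
    sigma : step size
    rng   : numpy random Generator
    """
    z = rng.standard_normal(x.shape)             # one Gaussian sample
    for cand in (x + sigma * z, x - sigma * z):  # offspring and its mirror
        f_cand = f(cand)
        if f_cand < f_x:        # sequential selection: accept the first
            return cand, f_cand # improvement and skip further evaluations
    return x, f_x               # elitist: keep the parent otherwise

# Example usage: one step on the sphere function
rng = np.random.default_rng(42)
x0 = rng.standard_normal(10)
x1, f1 = mirrored_sequential_step(lambda v: np.dot(v, v),
                                  x0, np.dot(x0, x0), 0.5, rng)
```

Because the mirrored pair is derived from a single Gaussian sample, sequential selection saves on average up to one function evaluation per improving iteration, which is where the reported speed-ups over the (1+1)-CMA-ES come from.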
Domains
Neural and Evolutionary Computing [cs.NE]