Filtering participants improves generalization in competitions and benchmarks
Abstract
We address the problem of selecting a winning algorithm in a challenge or benchmark. While evaluations carried out by third-party organizers eliminate the inventor-evaluator bias, little attention has been paid to the risk that the organizers themselves over-fit the winner's selection. In this paper, we carry out an empirical evaluation using the results of several challenges and benchmarks, evidencing this phenomenon. We show that a heuristic commonly used by organizers, pre-filtering participants with a trial run, reduces this over-fitting. We formalize the method and derive a semi-empirical formula for the optimal number k of top participants to retain from the trial run.
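To illustrate why such pre-filtering can help, here is a minimal simulation sketch, not the authors' code or formula: each participant's trial and final scores are modeled as independent noisy observations of a latent skill, and the winner is either the best final score outright or the best final score among the top-k trial entrants. The population size, noise level, and k = 10 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_winner(n=100, k=None, noise=0.5):
    # Latent "true skill" of each participant; trial and final scores
    # are independent noisy observations of it (toy model, not the
    # paper's protocol).
    skill = rng.normal(size=n)
    trial = skill + rng.normal(scale=noise, size=n)
    final = skill + rng.normal(scale=noise, size=n)

    # Optionally pre-filter: only the top-k trial entrants reach the final.
    pool = np.argsort(trial)[-k:] if k else np.arange(n)
    winner = pool[np.argmax(final[pool])]
    return skill[winner]  # true skill of the declared winner

runs = 2000
no_filter = np.mean([select_winner() for _ in range(runs)])
filtered = np.mean([select_winner(k=10) for _ in range(runs)])
print(f"mean winner skill, no filter: {no_filter:.3f}; top-10 filter: {filtered:.3f}")
```

In this toy setting the filtered winner tends to have higher true skill: it must score well on two independent noisy evaluations rather than one, which limits how much the selection can over-fit the noise of the final evaluation alone.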
Main file: ESANN__Judging_Competitions__a_Meta_Learning_Perspective.pdf (511.85 KB)
Origin: Files produced by the author(s)