AdaStop: adaptive statistical testing for sound comparisons of Deep RL agents
Abstract
Recently, the scientific community has questioned the statistical reproducibility of many empirical results, especially in the field of machine learning.
To contribute to resolving this reproducibility crisis, we propose a theoretically sound methodology for comparing the performance of a set of algorithms, which we illustrate with Deep Reinforcement Learning (Deep RL). The performance of a single execution of a Deep RL algorithm is a random variable, so several independent executions are needed to evaluate it.
When comparing algorithms with random performance, a central question is how many executions are needed to ensure that the result of the comparison is theoretically sound. Researchers in Deep RL often use fewer than 5 independent executions to compare algorithms: we claim that this is not enough in general. Moreover, when comparing more than 2 algorithms at once,
we must use a multiple-testing procedure to preserve low error guarantees. We introduce AdaStop, a new statistical test based on multiple group sequential tests.
When used to compare algorithms, AdaStop adapts the number of executions so as to stop as early as possible while ensuring that enough information has been collected to distinguish algorithms whose score distributions differ. We prove theoretically that AdaStop has a low probability of making a (family-wise) error. We illustrate the effectiveness of AdaStop in various use cases, including toy examples and Deep RL algorithms on challenging MuJoCo environments.
AdaStop is the first statistical test fitted to this kind of comparison: it is both a significant contribution to statistics and an important contribution to computational studies performed in Reinforcement Learning and other domains.
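To make the adaptive idea concrete, here is a minimal Python sketch of a batch-wise comparison of two agents, stopping as soon as an interim test is decisive. All names (the agent callables, batch sizes, and the plain permutation test used at each interim look) are illustrative assumptions, not the actual AdaStop procedure, which calibrates its interim thresholds so that the family-wise error stays controlled.

```python
import numpy as np

def permutation_pvalue(scores_a, scores_b, n_perm=10_000, rng=None):
    """Two-sided permutation test on the difference of mean scores."""
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([scores_a, scores_b])
    n_a = len(scores_a)
    observed = abs(scores_a.mean() - scores_b.mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += abs(perm[:n_a].mean() - perm[n_a:].mean()) >= observed
    return (count + 1) / (n_perm + 1)

def adaptive_compare(run_agent_a, run_agent_b, batch_size=5, max_batches=6, alpha=0.05):
    """Collect runs in batches; stop as soon as the interim test is decisive.

    NOTE: a real group-sequential procedure such as AdaStop adjusts the
    per-interim thresholds so the overall error stays below alpha; the
    fixed threshold used here is only illustrative.
    """
    scores_a, scores_b = [], []
    for k in range(1, max_batches + 1):
        scores_a.extend(run_agent_a() for _ in range(batch_size))
        scores_b.extend(run_agent_b() for _ in range(batch_size))
        p = permutation_pvalue(np.array(scores_a), np.array(scores_b))
        if p < alpha:
            return f"different (stopped after {k * batch_size} runs per agent, p={p:.3f})"
    return f"no detected difference after {max_batches * batch_size} runs per agent (p={p:.3f})"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical "agents": each call returns the score of one training run.
    agent_a = lambda: rng.normal(100.0, 10.0)
    agent_b = lambda: rng.normal(110.0, 10.0)
    print(adaptive_compare(agent_a, agent_b))
```

In this toy setting the clearly different agents are typically separated after one or two batches, while similar agents consume the full run budget; the point of a calibrated procedure like AdaStop is to obtain this early-stopping behaviour without inflating the family-wise error across interim looks and across pairs of compared agents.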
Domains
Machine Learning [stat.ML]
Origin: Files produced by the author(s)