Data-Driven Locality-Aware Batch Scheduling
Abstract
Clusters employ workload schedulers such as the Slurm Workload Manager to allocate computing jobs onto nodes. These schedulers typically aim for a good trade-off between increasing resource utilization and user satisfaction (decreasing job waiting times). However, they are usually unaware of jobs sharing large input files, which may happen in data-intensive scenarios. The same input files may end up being loaded several times, leading to a waste of resources.
We study how to design a data-aware job scheduler that keeps large input files on the computing nodes without impacting other memory needs, and that benefits from previously loaded files to decrease data transfers and thereby reduce job waiting times.
We present three schedulers that distribute the load across the computing nodes while reusing, as much as possible, input files already loaded in the memory of some node.
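To make the idea concrete, the following is a minimal sketch (not taken from the paper) of a greedy, locality-aware node-selection rule: among nodes with enough free cores, it prefers one that already holds the job's input file in memory and otherwise balances load. The names (Node, Job, pick_node) are hypothetical, and the paper's three schedulers are not reproduced here.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    free_cores: int
    cached_files: set = field(default_factory=set)  # input files kept in local memory

@dataclass
class Job:
    job_id: str
    cores: int
    input_file: str

def pick_node(job: Job, nodes: list[Node]) -> Node | None:
    """Prefer a node that already holds the job's input file;
    otherwise fall back to the node with the most free cores."""
    candidates = [n for n in nodes if n.free_cores >= job.cores]
    if not candidates:
        return None  # no node can run the job now; it must wait
    with_data = [n for n in candidates if job.input_file in n.cached_files]
    pool = with_data if with_data else candidates
    return max(pool, key=lambda n: n.free_cores)

# Usage example
nodes = [Node("n1", 8, {"dataset_A"}), Node("n2", 16)]
job = Job("j1", 4, "dataset_A")
chosen = pick_node(job, nodes)
if chosen:
    chosen.free_cores -= job.cores
    chosen.cached_files.add(job.input_file)  # keep the file resident for later jobs
print(chosen.name if chosen else "queued")  # -> n1 (data already local)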
We perform simulations with single-node jobs, using traces of real HPC-cluster usage, to compare these schedulers to classical job schedulers. The results show that keeping data in local memory between successive jobs and using data-locality information to schedule jobs improves performance compared to a widely used scheduler (FCFS, with and without backfilling): job waiting times are reduced (a 7.5% improvement in stretch) and the amount of data transferred decreases by 7%.
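For context, the stretch (or slowdown) of a job is commonly defined as its flow time relative to its processing time; the exact variant used in the paper is not specified in this abstract. A common definition is

$\text{stretch}(j) = \dfrac{C_j - r_j}{p_j} = \dfrac{\text{waiting time} + \text{run time}}{\text{run time}}$

where $r_j$ is the submission time, $C_j$ the completion time, and $p_j$ the run time of job $j$, so a lower stretch means jobs spend less time waiting relative to their length.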