Conference paper (Year: 2024)

Data-Driven Locality-Aware Batch Scheduling

Abstract

Clusters employ workload schedulers such as the Slurm Workload Manager to allocate computing jobs onto nodes. These schedulers usually aim at a good trade-off between increasing resource utilization and user satisfaction (decreasing job waiting time). However, they are typically unaware of jobs sharing large input files, which may happen in data-intensive scenarios. The same input files may end up being loaded several times, leading to a waste of resources. We study how to design a data-aware job scheduler that is able to keep large input files on the computing nodes, without impacting other memory needs, and can benefit from previously loaded files to decrease data transfers in order to reduce job waiting times. We present three schedulers capable of distributing the load between the computing nodes while re-using input files already loaded in the memory of some node as much as possible. We perform simulations with single-node jobs using traces of real HPC-cluster usage to compare them to classical job schedulers. The results show that keeping data in local memory between successive jobs and using data-locality information to schedule jobs improves performance compared to a widely-used scheduler (FCFS, with and without backfilling): a reduction in job waiting time (a 7.5% improvement in stretch) and a decrease in the amount of data transfers (7%).
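The core idea of such a data-locality-aware scheduler, preferring a free node that already holds a job's input file before falling back to any free node, can be illustrated with a minimal sketch. This is a hypothetical simplification for illustration only, not one of the three schedulers from the paper; the `Job` and `Node` structures and the `schedule` function are assumptions introduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    input_file: str   # identifier of the (possibly shared) input file

@dataclass
class Node:
    name: str
    busy: bool = False
    cached_files: set = field(default_factory=set)  # files kept in node memory

def schedule(job: Job, nodes: list[Node]) -> Node | None:
    """Pick a free node, preferring one that already caches the job's input."""
    free = [n for n in nodes if not n.busy]
    if not free:
        return None  # no free node: the job waits in the queue
    # Data-locality preference: reuse a node that already holds the input file
    for node in free:
        if job.input_file in node.cached_files:
            node.busy = True
            return node
    # Otherwise fall back to the first free node (FCFS-like) and load the file
    node = free[0]
    node.cached_files.add(job.input_file)
    node.busy = True
    return node
```

In this sketch, jobs sharing an input file tend to be routed to the node where that file is already resident, which is the mechanism by which redundant data transfers are avoided; the actual schedulers in the paper additionally balance load across nodes and respect memory constraints.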
Main file

Data-Driven Locality-Aware Batch Scheduling.pdf (867.99 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04500281, version 1 (11-03-2024)

License

Identifiers

  • HAL Id: hal-04500281, version 1

Cite

Maxime Gonthier, Elisabeth Larsson, Loris Marchal, Carl Nettelblad, Samuel Thibault. Data-Driven Locality-Aware Batch Scheduling. APDCM 2024 - 26th Workshop on Advances in Parallel and Distributed Computational Models, 38th IEEE International Parallel and Distributed Processing Symposium, May 2024, San Francisco, United States. ⟨hal-04500281⟩
106 views
127 downloads
