Periodic I/O scheduling for super-computers
Abstract
With the ever-growing need for data in HPC applications, congestion at the
I/O level is becoming critical in supercomputers. Architectural enhancements such as
burst buffers and prefetching have been added to machines, but they are not sufficient to
prevent congestion. Recent online I/O scheduling strategies have been put in
place, but they add an additional congestion point and introduce overhead into the
computation of applications.
In this work, we show how to take advantage of the periodic nature of HPC
applications to develop efficient periodic scheduling strategies
for their I/O transfers. Our strategy computes a pattern once, during the job-scheduling phase,
that defines the I/O behavior of each application; the applications then run
independently, transferring their I/O at the specified times. Our strategy limits
I/O congestion at the I/O-node level and can easily be integrated
into current job schedulers. We validate this model through extensive simulations
and experiments, comparing it to state-of-the-art online solutions. Not only does
our scheduler have the advantage of being decentralized, thus avoiding the
overhead of online schedulers, but it also performs better than these
solutions, improving application dilation by up to 13% and maximum
system efficiency by up to 18%.
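For illustration only, and not the algorithm described in this report, the idea of a precomputed periodic pattern can be pictured as follows: each application is assigned, once, an I/O window inside a common period and then simply transfers at its assigned time, so transfers never compete for the shared bandwidth. All names, volumes, and the bandwidth value in this sketch are hypothetical.

    # Hypothetical sketch (not the authors' algorithm): build a periodic pattern
    # that gives each application a non-overlapping I/O window inside a fixed
    # period, so at most one application uses the shared I/O nodes at a time.
    from dataclasses import dataclass

    @dataclass
    class App:
        name: str
        io_volume: float          # bytes transferred once per period (assumed)

    BANDWIDTH = 100e9             # assumed aggregate I/O-node bandwidth (bytes/s)

    def build_pattern(apps, period):
        """Return {app.name: (start, end)} I/O windows within one period,
        placed back to back to avoid congestion."""
        t = 0.0
        pattern = {}
        for app in apps:
            duration = app.io_volume / BANDWIDTH
            if t + duration > period:
                raise ValueError("period too short for all I/O phases")
            pattern[app.name] = (t, t + duration)
            t += duration
        return pattern

    if __name__ == "__main__":
        apps = [App("A", 2e11), App("B", 5e11), App("C", 1e11)]
        print(build_pattern(apps, period=30.0))   # computed once, then reused every period

Once such a pattern is computed at job-scheduling time, no further coordination is needed: each application only has to follow its own window, which is what makes the approach decentralized at runtime.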
In this article, we study I/O management techniques for supercomputers. The
novelty of this work is the use, in the design of our algorithms, of certain
structural characteristics of high-performance applications, namely their
periodicity.
We compare our approach with recent solutions and show gains of up to 18% in
system efficiency and up to 13% in dilation.
We also show how these solutions can easily be integrated into
supercomputers.