Journal article. Journal of Parallel and Distributed Computing, 2023

Optimizing Performance and Energy Across Problem Sizes Through a Search Space Exploration and Machine Learning

Lana Scravaglieri
Mihail Popov
Laércio Lima Pilla
Amina Guermouche
Emmanuelle Saillard

Abstract

HPC systems expose configuration options that help users optimize their applications' execution. Questions related to the best thread and data mapping, number of threads, or cache prefetching have been posed for different applications, yet they have been mostly limited to a single optimization objective (e.g., performance) and a fixed application problem size. Unfortunately, optimization strategies that work well in one scenario may generalize poorly when applied in new contexts. In this work, we investigate the impact of configuration options and different problem sizes over both performance and energy. Through a search space exploration, we have found that well-adapted NUMA-related options and cache prefetchers provide significantly more gains for energy (5.9×) than performance (1.85×) over a standard baseline configuration. Moreover, reusing optimization strategies from performance to energy only provides 40% of the gains found when natively optimizing for energy, while transferring strategies across problem sizes is limited to about 70% of the original gains. In order to fill this gap and to avoid exploring the whole search space in multiple scenarios for each new application, we have proposed a new Machine Learning framework. Taking information from one problem size enables us to predict the best configurations for other sizes. Overall, our Machine Learning models achieve 88% of the native gains when cross-predicting across performance and energy, and 85% when predicting across problem sizes.
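
The cross-size prediction idea described above can be illustrated with a small sketch: measure a metric (e.g., energy) for the explored configurations on one problem size, fit a model on those measurements, and use its predictions to rank candidate configurations for another size instead of re-exploring the full search space. The features, regressor, and data below are illustrative assumptions, not the paper's actual framework.

# Minimal sketch, assuming a configuration space of thread count, NUMA policy,
# and prefetcher setting; the actual options and model in the paper may differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each row encodes one configuration: (thread count, NUMA policy, prefetcher setting).
configs = np.array([(t, n, p)
                    for t in (8, 16, 32)     # number of threads
                    for n in (0, 1)          # 0 = first-touch, 1 = interleave
                    for p in range(4)])      # prefetcher setting id

# Placeholder energy measurements collected on the small problem size.
energy_small = rng.uniform(50.0, 200.0, size=len(configs))

# Learn the configuration -> energy relationship from the small size.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(configs, energy_small)

# Use the learned ranking to pick candidate configurations for a larger size,
# avoiding a full search space exploration on that size.
predicted = model.predict(configs)
best = configs[np.argsort(predicted)[:3]]
print("Top predicted configurations (threads, NUMA, prefetcher):")
print(best)

In the paper, this kind of transfer recovers about 85% of the gains found by natively exploring the target problem size, and 88% when cross-predicting between performance and energy.
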
Main file
input_paper.pdf (673.66 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03810305, version 1 (11-10-2022)

License

Attribution


Cite

Lana Scravaglieri, Mihail Popov, Laércio Lima Pilla, Amina Guermouche, Olivier Aumage, et al.. Optimizing Performance and Energy Across Problem Sizes Through a Search Space Exploration and Machine Learning. Journal of Parallel and Distributed Computing, 2023, 180, pp.104720. ⟨10.1016/j.jpdc.2023.104720⟩. ⟨hal-03810305⟩

Collections

CNRS INRIA INRIA2