Impact of Shutdown Techniques for Energy-Efficient Cloud Data Centers
Abstract
Electricity consumption is a pressing concern in current large-scale systems such as datacenters and supercomputers. These infrastructures are often dimensioned for peak workload. However, they are not power-proportional: even when the workload is low, their power consumption remains high. Shutdown techniques have been developed to adapt the number of powered-on servers to the actual workload. However, datacenter operators are reluctant to adopt such approaches because of their potential impact on reactivity and hardware failures, and because their energy gains are often largely misjudged. In this article, we evaluate the potential gain of shutdown techniques while accounting for the time and energy costs of shutting down and booting up servers. This evaluation is carried out on recent server architectures and on hypothetical future energy-aware architectures. We also determine whether knowledge of the future workload is required to save energy with such techniques. We present simulation results based on real traces collected on different infrastructures, under various machine configurations and several shutdown policies, with and without workload prediction.
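To make the cost trade-off mentioned above concrete, the sketch below illustrates the standard break-even reasoning behind such evaluations: a server should be switched off only if its idle period is long enough for the energy saved while off to outweigh the energy spent on the off/on transition. This is a minimal illustration, not the paper's evaluation method; all parameter names and numeric values are hypothetical assumptions.

```python
# Break-even idle duration for a shutdown decision.
# Parameter names and values are illustrative assumptions, not
# measurements or notation from the paper.

def break_even_idle_time(p_idle, p_off, e_shutdown, e_boot,
                         t_shutdown, t_boot):
    """Minimum idle period (seconds) for which powering the server off
    saves energy compared with leaving it idle.

    Shutting down during an idle period T is worthwhile when
        p_idle * T > e_shutdown + e_boot + p_off * (T - t_shutdown - t_boot)
    which yields the threshold returned below (assuming p_idle > p_off).
    The result is only meaningful if it exceeds t_shutdown + t_boot.
    """
    transition_time = t_shutdown + t_boot          # seconds spent switching
    transition_energy = e_shutdown + e_boot        # joules spent switching
    return (transition_energy - p_off * transition_time) / (p_idle - p_off)


if __name__ == "__main__":
    # Hypothetical server: 100 W idle, 5 W when off,
    # 30 s shutdown / 150 s boot costing 3 kJ / 20 kJ.
    t_s = break_even_idle_time(p_idle=100.0, p_off=5.0,
                               e_shutdown=3_000.0, e_boot=20_000.0,
                               t_shutdown=30.0, t_boot=150.0)
    print(f"Shut down only for idle periods longer than {t_s:.0f} s")
```

With these hypothetical figures the threshold is roughly 230 s, which is why, as the abstract notes, the actual gains depend heavily on the architecture's transition costs and on how well idle periods can be anticipated.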