Dynamic Speed Scaling Minimizing Expected Energy Consumption for Real-Time Tasks
Abstract
This paper proposes a Markov Decision Process (MDP) approach to compute the
optimal on-line speed scaling policy that minimizes the energy consumption of a processor
executing a finite or infinite set of jobs with real-time constraints. The policy is computed
off-line but used on-line. We establish several qualitative properties of the optimal policy:
monotonicity with respect to the job parameters, and a comparison with on-line deterministic
algorithms. Numerical experiments show that our approach performs well compared with off-line
optimal solutions and outperforms on-line solutions that are oblivious to statistical information
on the jobs. Several extensions are also presented to account for speed-change and
context-switch costs. Non-convex power functions are also handled, in order to model leakage.
Finally, a state space reduction based on a coarser discretization is presented to deal with
the curse of dimensionality of the MDP.
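
As an illustration of the off-line computation / on-line use split described above, the
following is a minimal sketch of a speed-scaling MDP solved by value iteration. It is not the
paper's construction: it assumes a single common relative deadline D, a fixed job size C,
Bernoulli arrivals with probability P_ARR, integer work quantities, a convex power function,
and a discounted cost (chosen only so the toy iteration converges); every name and parameter
value here is invented for illustration.

    import itertools
    from collections import defaultdict

    # --- Illustrative parameters (invented, not taken from the paper) ---
    D = 3                  # common relative deadline of every job, in time steps
    C = 2                  # work units carried by each arriving job
    P_ARR = 0.5            # probability that a new job arrives at each step
    SPEEDS = [0, 1, 2, 3]  # available processor speeds (work units per step)
    GAMMA = 0.95           # discount factor, used here only for convergence
    A_MAX = 2 * C          # cap on pending work, keeps the state space finite

    def power(v):
        """Convex dynamic power model; the paper also handles non-convex ones."""
        return v ** 3

    # A state is the cumulative remaining-work tuple (A1, ..., AD):
    # Ai = work units that must be completed within the next i steps.
    def states():
        for t in itertools.product(range(A_MAX + 1), repeat=D):
            if all(t[i] <= t[i + 1] for i in range(D - 1)):  # nondecreasing
                yield t

    def step(state, v, arrival):
        """Run v units of earliest-deadline work (EDF), then advance time."""
        after = tuple(max(a - v, 0) for a in state)    # execute v units
        shifted = list(after[1:]) + [after[-1]]        # deadlines shrink by 1
        if arrival:                                    # new job, due in D steps
            shifted[-1] += C
        # Capping at A_MAX silently drops work in overload; fine in a toy model.
        return tuple(min(a, A_MAX) for a in shifted)

    def q_value(V, s, v):
        """Expected discounted cost of using speed v in state s."""
        return power(v) + GAMMA * (P_ARR * V[step(s, v, True)]
                                   + (1 - P_ARR) * V[step(s, v, False)])

    # --- Off-line phase: value iteration over the whole state space ---
    V = defaultdict(float)
    for _ in range(500):
        V = defaultdict(float, {
            # Speeds below A1 would miss the earliest deadline: infeasible.
            # States with no feasible continuation get an infinite value.
            s: min((q_value(V, s, v) for v in SPEEDS if v >= s[0]),
                   default=float('inf'))
            for s in states()})

    # --- On-line phase: pick the cheapest feasible speed by table lookup ---
    def policy(s):
        return min((v for v in SPEEDS if v >= s[0]),
                   key=lambda v: q_value(V, s, v))

    print(policy((0, 0, 2)))   # little urgent work -> a low speed
    print(policy((2, 2, 4)))   # urgent backlog -> a higher speed

The table V embodies the off-line phase, while policy is the on-line part: at run time,
selecting the speed reduces to a constant-time lookup, which is what makes it practical to
compute the policy off-line and use it on-line.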