An MDP-Based Solution for the Energy Minimization of Non-Clairvoyant Hard Real-Time Systems
Une solution basée sur les MDP pour la minimisation de l'énergie des systèmes temps réel durs non clairvoyants
Abstract
We address the problem of scheduling a possibly infinite sequence of hard real-time jobs on a single-core processor, with the dual goal that (1) every job must finish before its deadline, and (2) the energy consumption must be minimized. The decision variable is the speed of the processor at each time instant, chosen from a finite set of available speeds. Our goal is to design a speed policy that decides, at each time instant, the speed of the processor as a function of the job characteristics. We focus specifically on the non-clairvoyant case, meaning that the actual size of a job (the amount of work required to complete it) is unknown when the job is released; only its probability distribution and its maximal size are known. In this context, we propose two new speed policies: the optimal solution of a Markov Decision Process, called MOSP, and a heuristic speed policy called Expected Load (EL), obtained by adapting the classical Optimal Available (OA) policy to jobs with random sizes. Our MOSP algorithm is split into two phases: the first phase is offline and computes the optimal processor speed for each possible system state, while the second phase is online and retrieves the speed to apply to the processor via a table lookup. Compared with the existing speed policies from the literature, MOSP achieves the optimal energy consumption, but at the cost of a significant state space size. In contrast, EL achieves an energy consumption that is, on average, close to the optimal one obtained with MOSP, at almost no cost in terms of state space.
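To make the offline/online split concrete, here is a minimal Python sketch of the idea, not the paper's actual algorithm: it assumes a single job (whereas the paper handles a possibly infinite job sequence), integer speeds, a cubic power model, and a state made of the work executed so far and the time remaining before the deadline; all constants (`SPEEDS`, `SIZE_DIST`, `DEADLINE`) and function names are illustrative.

```python
from functools import lru_cache

# --- Toy problem data (illustrative assumptions, not taken from the paper) ---
SPEEDS = (0, 1, 2, 4)                 # available processor speeds
SIZE_DIST = {2: 0.5, 4: 0.3, 6: 0.2}  # known distribution P(W = w) of job size
W_MAX = max(SIZE_DIST)                # maximal job size, known at release
DEADLINE = 5                          # relative deadline, in time steps

def power(speed):
    """Energy consumed during one time step at `speed` (cubic power model)."""
    return speed ** 3

def survival(e):
    """P(W > e): probability the job is still unfinished after e work units."""
    return sum(p for w, p in SIZE_DIST.items() if w > e)

# --- Offline phase: dynamic programming over states (executed work, time left)
@lru_cache(maxsize=None)
def value(e, t):
    """Minimal expected energy-to-go, given that the job is still running
    after e executed work units with t time steps left before the deadline."""
    if e >= W_MAX:
        return 0.0                    # the job has surely finished
    if t == 0:
        return float("inf")           # still running at the deadline: hard miss
    best = float("inf")
    for s in SPEEDS:
        still_running = survival(e + s) / survival(e)
        best = min(best, power(s) + still_running * value(e + s, t - 1))
    return best

def build_speed_table():
    """Tabulate, for every state, a speed achieving the minimum in value()."""
    table = {}
    for t in range(1, DEADLINE + 1):
        for e in range(W_MAX):
            table[(e, t)] = min(
                SPEEDS,
                key=lambda s: power(s)
                + (survival(e + s) / survival(e)) * value(e + s, t - 1),
            )
    return table

# --- Online phase: one table lookup per decision instant ---
speed_table = build_speed_table()
print(speed_table[(0, DEADLINE)])     # speed to apply when the job is released
```

In this sketch, the offline phase carries all the computational cost (the enumeration of the state space mentioned in the abstract), while the online phase reduces to a single dictionary access per decision instant.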