The Entropy of a Distributed Computing Schedule - the Example of Population Protocols
Abstract
A distributed computing system can be viewed as the result of the interplay between a distributed algorithm, which specifies the effects of each local event (e.g., the reception of a message), and an adversary, which chooses the interleaving (schedule) of these events in the execution. For some problems, assuming that the adversary selects the schedule according to some probability distribution greatly helps in devising (almost) correct solutions. But how much randomness is really necessary? To what extent does a problem admit implementations that are robust against a "not so random" schedule? This paper takes a first step toward answering this question by borrowing the concept of T-randomness, 0 ≤ T ≤ 1, from algorithmic information theory. Roughly speaking, the value T fixes the entropy rate of the considered schedules. For instance, the case T = 1 corresponds, in a specific sense, to schedules in which events are independent and sampled uniformly (perfect randomness). The holy-grail question can then be stated precisely: determine the optimal entropy rate needed to solve a given problem. We first show that perfect randomness is never required: if a finite-state algorithm solves a problem with 1-randomness, then it still solves the same problem with T-randomness for some T < 1. Second, we illustrate how to compute bounds on the optimal entropy rate for a specific problem, namely leader election.
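As an illustrative sketch, not taken from the paper, the following Python snippet simulates the classic pairwise-elimination leader election protocol under a uniformly random scheduler, i.e., the perfect-randomness (T = 1) regime the abstract refers to. The function name `leader_election` and its parameters are hypothetical.

```python
import random

def leader_election(n, rng=None):
    """Simulate pairwise-elimination leader election under a uniform scheduler.

    Every agent starts as a leader; when the scheduler picks two leaders,
    one demotes itself to follower (the classic L,L -> L,F rule).
    Returns the number of interactions until a single leader remains.
    """
    rng = rng or random.Random(0)
    state = [True] * n          # True = leader, False = follower
    interactions = 0
    while sum(state) > 1:
        # Uniformly random pairing: the "perfect randomness" (T = 1) case.
        i, j = rng.sample(range(n), 2)
        if state[i] and state[j]:
            state[j] = False    # one of the two leaders becomes a follower
        interactions += 1
    return interactions

if __name__ == "__main__":
    # Under uniform pairing the expected number of interactions is Theta(n^2).
    print(leader_election(100))
```

Under uniform pairing this protocol converges to a single leader with probability 1; the T < 1 regimes studied in the paper correspond, informally, to schedulers whose pairing choices carry less entropy than this uniform sampling.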