Skewed Branch Predictors
Abstract
As modern microprocessors employ deeper pipelines and issue multiple instructions per cycle, they are becoming increasingly dependent on good branch prediction. During the past five years, researchers have shown that branch-prediction accuracy can be improved by basing predictions on the outcomes of previous branches. Many such methods have been proposed, but they all share a common characteristic: they require hardware resources to implement the tables and state machines that record the branch-history information. Because hardware resources are invariably limited, it is not possible to hold all relevant branch history for all active branches at the same time, especially for larger workloads consisting of multiple processes and operating-system code. The resulting problem, commonly referred to as aliasing in the branch-predictor tables, is in many ways similar to the misses that occur in finite-sized hardware caches. The first contribution of this paper is to propose a classification for three different types of branch aliasing (compulsory, capacity, and conflict). We argue that although previous research has reduced compulsory and capacity aliasing, little has been done to reduce conflict aliasing. Drawing on established work in caches, our second contribution is to propose the {\em skewed branch predictor}, a multi-bank, tag-less structure designed specifically to reduce the impact of conflict aliasing. Through both analytical and simulation models, we show that the skewed branch predictor removes a substantial portion of conflict aliasing by introducing redundancy into the branch-predictor tables. Although this redundancy increases capacity aliasing compared to a standard one-bank structure of comparable size, our simulations show that the reduction in conflict aliasing outweighs this effect, yielding a net gain in prediction accuracy.
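To make the multi-bank, tag-less organization concrete, the following is a minimal C sketch of a skewed predictor: several banks of 2-bit saturating counters, each indexed by a different hash of the branch address and global history, with the final prediction taken by majority vote. The XOR-based index functions, the choice of three banks, and the update-all-banks policy are illustrative assumptions for the sketch, not the skewing functions or update policy evaluated in the paper.

\begin{verbatim}
#include <stdint.h>

/* Sketch of a skewed branch predictor: NUM_BANKS tag-less banks of
 * 2-bit saturating counters, each indexed by its own hash of the
 * branch address (pc) and the global branch history (ghist).  The
 * hashes below are placeholders, not the paper's skewing functions. */

#define NUM_BANKS  3
#define BANK_BITS  12                      /* 4K counters per bank */
#define BANK_SIZE  (1u << BANK_BITS)

static uint8_t bank[NUM_BANKS][BANK_SIZE]; /* counters in 0..3 */

/* Each bank mixes pc and history differently, so two branches that
 * collide in one bank are unlikely to collide in the others. */
static uint32_t skew_index(int b, uint32_t pc, uint32_t ghist)
{
    uint32_t x = pc ^ (ghist << b) ^ (ghist >> (BANK_BITS - b));
    return x & (BANK_SIZE - 1);
}

/* Predict taken if a majority of banks predict taken. */
int predict(uint32_t pc, uint32_t ghist)
{
    int votes = 0;
    for (int b = 0; b < NUM_BANKS; b++)
        votes += bank[b][skew_index(b, pc, ghist)] >= 2;
    return votes > NUM_BANKS / 2;
}

/* Update every bank with the resolved outcome (taken = 1).  Storing
 * each branch's state in several banks is the redundancy that absorbs
 * conflict aliasing, at the cost of some extra capacity aliasing. */
void update(uint32_t pc, uint32_t ghist, int taken)
{
    for (int b = 0; b < NUM_BANKS; b++) {
        uint8_t *c = &bank[b][skew_index(b, pc, ghist)];
        if (taken) { if (*c < 3) (*c)++; }
        else       { if (*c > 0) (*c)--; }
    }
}
\end{verbatim}

The intent of giving each bank its own index function is that a destructive collision between two branches in one bank is unlikely to recur in the remaining banks, so the majority vote can still deliver the correct prediction; the duplicated counter state is the redundancy, and hence the added capacity pressure, referred to in the abstract.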