Speculative Runtime Parallelization of Loop Nests: Towards Greater Scope and Efficiency
Abstract
Runtime loop optimization and speculative execution are becoming increasingly prominent as a means of exploiting performance in the current multi-core and many-core era. However, wider and more efficient use of such techniques is mainly hampered by the prohibitive time overhead of centralized data race detection, dynamic code behavior modeling, and code generation. Most existing Thread Level Speculation (TLS) systems rely on slicing the target loops into chunks and trying to execute the chunks in parallel, with the help of a centralized, performance-penalizing verification module that handles data races. Because they lack a data dependence model, these speculative systems cannot perform advanced transformations and, more importantly, the chances of rollback are high. The polytope model is a well-known mathematical model for analyzing and optimizing loop nests. Current state-of-the-art tools limit the application of the polytope model to static control codes; thus, none of them can handle codes with while loops, indirect memory accesses, or pointers. Apollo (Automatic POLyhedral Loop Optimizer) is a framework that goes one step further and applies the polytope model dynamically by using TLS. Apollo can predict, at runtime, whether a code is behaving linearly or not, and applies polyhedral transformations on-the-fly. This paper presents a novel system that extends Apollo to handle codes whose memory accesses are not necessarily linear. More generally, this approach expands the applicability of the polytope model at runtime to a wider class of codes.
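To give a flavor of the target codes, the sketch below (not taken from the paper; the function and variable names kernel, a, b, idx, and n are hypothetical) shows a loop of the kind the abstract refers to: a while loop with an indirect memory access, whose trip count and access functions are unknown at compile time, so static polyhedral compilers cannot analyze it, although its behavior may well be linear at runtime.

/* Illustrative sketch: a kernel that static polyhedral tools cannot
 * handle, but whose runtime behavior may be linear and thus amenable
 * to speculative polyhedral optimization. */
void kernel(double *a, const double *b, const int *idx, int n)
{
    int i = 0;
    while (i < n) {            /* while loop: no static trip count */
        a[idx[i]] += b[i];     /* indirect access through idx[]    */
        i++;
    }
}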