Profile Guided Hybrid Compilation
Abstract
Heat dissipation limitations caused a paradigm change in how the computational capacity of chips is scaled: from increasing the clock frequency to growing parallelism. To exploit this parallelism, computer applications must be made parallel, a difficult task left to software developers. To aid in this process, many optimizing compilers and frameworks have been developed, such as polyhedral compilation tools (e.g. PLuTo).
To apply a transformation to a program, a compiler must prove that the transformation preserves the original program's semantics. When transformation selection and validation are done solely by reasoning over the source code, the process is called static. When transforming syntactically poor code, such as code containing memory references with multiple levels of possible indirection, static compilers are often unable to verify applicability, and many optimization opportunities are lost. To overcome this lack of information extractable from the source code, dynamic compilers perform transformations at run time, when all variables of the program have assigned values. However, their analyses and optimizations must remain rather simple, since the time spent on validation and optimization adds overhead to the application's execution time; moreover, validation must be performed continually, as the observed application behavior might change and invalidate earlier predictions. Hybrid analyses collect run-time information and feed it back into a static compiler to help with transformation selection and validation.
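As a minimal illustration (our own, not taken from the framework's output), consider a loop whose memory references go through an index array. Statically, a compiler cannot tell whether two iterations touch the same element; at run time, once the index values are known, the dependence question has a definite answer:

    /* Hypothetical example of syntax-poor code: a[idx[i]] may or may not
     * alias a[idx[j]] for i != j, so a static compiler cannot prove the
     * loop parallel. With the contents of idx known at run time, the
     * dependence question becomes decidable. */
    void scale(double *a, const int *idx, double k, int n) {
        for (int i = 0; i < n; ++i)
            a[idx[i]] *= k;   /* independent iff idx holds no duplicates */
    }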
This work advocates the use of hybrid analyses when optimizing loops, the regions where most programs spend the majority of their execution time. It proposes a framework that statically applies a sequence of complex loop transformations in a speculative manner. Based on the memory access expressions of the program, it generates lightweight run-time tests that ensure a given transformation does not violate any data dependence. Using information collected at run time, it discards transformations that would never be applied because their validity tests are too constraining. At the heart of this technique is a powerful quantifier elimination scheme over multivariate integer polynomials, which provides more precise results than any other known tool.
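As a sketch of the kind of lightweight test involved (assuming a simple linearized loop nest; the framework's actual generated tests may differ), consider parallelizing the outer loop of a nest that writes A[i*lda + j]. Distinct values of i touch disjoint memory whenever lda >= m, a condition checkable in constant time:

    /* Hypothetical speculative guard: the rows [i*lda, i*lda + m) are
     * pairwise disjoint when lda >= m, so the parallel version is safe;
     * otherwise fall back to the original sequential nest. */
    void kernel(double *A, long lda, long n, long m) {
        if (lda >= m) {
            #pragma omp parallel for            /* transformed version */
            for (long i = 0; i < n; ++i)
                for (long j = 0; j < m; ++j)
                    A[i*lda + j] *= 0.5;
        } else {
            for (long i = 0; i < n; ++i)        /* original version */
                for (long j = 0; j < m; ++j)
                    A[i*lda + j] *= 0.5;
        }
    }

The test itself costs a single comparison, so a failed speculation only pays the price of branching to the unmodified code.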
The soundness of the framework is demonstrated on a modified version of the PolyBench 4.1 benchmark suite in which all data structures have been linearized. Applying the same transformations that a polyhedral optimizer would apply to the original programs, our framework generates tests that correctly validate each transformation's use, accepting correct transformations and blocking invalid ones. To further illustrate the generality of our run-time test generation scheme, we demonstrate its capacity to generate correct tests for programs with polynomial memory accesses, such as those arising from packed triangular matrix access patterns.
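For concreteness, a standard packed lower-triangular layout (a generic sketch of the access pattern, not code from the benchmark) stores row i starting at offset i*(i+1)/2, so the access function is a degree-2 polynomial in the loop index; deciding whether two iterations can touch the same element then requires reasoning over integer polynomial equalities and inequalities, which is exactly where the quantifier elimination scheme applies:

    /* Packed lower-triangular storage: element (i, j), with 0 <= j <= i,
     * lives at the polynomial offset i*(i+1)/2 + j. */
    double tri_get(const double *T, long i, long j) {
        return T[i*(i+1)/2 + j];
    }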