The Next 700 Accelerated Layers
Abstract
Deep learning frameworks automate the deployment, distribution, synchronization, memory allocation, and hardware acceleration of models represented as graphs of computational operators. These operators wrap high-performance libraries such as cuDNN or NNPACK. When the computation does not match any predefined library call, custom operators must be implemented, often at high engineering cost and performance penalty, limiting the pace of innovation. To address this productivity gap, we propose and evaluate: (1) a domain-specific language with a tensor notation close to the mathematics of deep learning; (2) a Just-In-Time optimizing compiler based on the polyhedral framework; (3) carefully coordinated linear optimization and evolutionary algorithms to synthesize high-performance CUDA kernels; (4) the transparent integration of our flow into PyTorch and Caffe2, providing the fully automatic synthesis of high-performance GPU kernels from simple tensor algebra. The performance is comparable to, and often exceeds, that of highly tuned libraries.
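To give a flavor of the tensor notation and the PyTorch integration, here is a minimal sketch written against the open-source tensor_comprehensions Python package; the exact tc.define signature and calling convention reflect an early release and are stated here as assumptions, not quotations from the paper.

    import torch
    import tensor_comprehensions as tc

    # Matrix multiplication in an Einstein-like tensor notation.
    # "+=!" zero-initializes C, then reduces over the index k,
    # which appears on the right-hand side but not on the left.
    lang = """
    def matmul(float(M, K) A, float(K, N) B) -> (C) {
        C(m, n) +=! A(m, k) * B(k, n)
    }
    """

    # JIT-compile the comprehension into a CUDA kernel and invoke it
    # like any other function on GPU tensors. (API names assumed.)
    matmul = tc.define(lang, name="matmul")
    A = torch.randn(32, 64).cuda()
    B = torch.randn(64, 16).cuda()
    C = matmul(A, B)  # result has shape (32, 16)

In released versions of the package, the same handle also exposes an autotuning entry point that runs the evolutionary search mentioned above to specialize the kernel for given input sizes; its option names vary by version.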