High Level Transforms for SIMD and Low-Level Computer Vision Algorithms
Abstract
This paper presents a review of algorithmic transforms, called High Level Transforms, for IBM, Intel and ARM SIMD multi-core processors, aimed at accelerating the implementation of low-level image processing algorithms. We show that these optimizations provide a significant acceleration. A first evaluation of the 512-bit SIMD Xeon Phi is also presented. We emphasize that the combination of optimizations leading to the best execution time cannot be predicted, and thus systematic benchmarking is mandatory. Once the best configuration is found for each architecture, a comparison of these performances is presented. The Harris point detection operator is selected as being representative of low-level image processing and computer vision algorithms. Being composed of five convolutions, it is more complex than a simple filter and offers more opportunities to combine optimizations. The presented work can scale across a wide range of codes using 2D stencils and convolutions.
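Since the abstract describes the Harris operator as a chain of five convolutions, the following scalar (non-SIMD) C sketch illustrates that structure under common assumptions: Sobel gradients, a 3x3 Gaussian smoothing and the standard Harris response with constant kappa = 0.04. The function names (conv3x3, mul, harris) and the kernels are illustrative assumptions, not the paper's code.

/* Scalar reference sketch of the Harris operator: five 3x3 convolutions
 * (two gradients + three smoothings), then a pointwise response.
 * Kernels, names and the constant kappa are assumptions for illustration. */
#include <stdlib.h>

typedef float pix;

/* Generic 3x3 convolution over the interior of an h x w image. */
static void conv3x3(const pix *in, pix *out, int h, int w, const float k[9])
{
    for (int i = 1; i < h - 1; i++)
        for (int j = 1; j < w - 1; j++) {
            float s = 0.0f;
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++)
                    s += k[(di + 1) * 3 + (dj + 1)] * in[(i + di) * w + (j + dj)];
            out[i * w + j] = s;
        }
}

/* Pointwise product of two images. */
static void mul(const pix *a, const pix *b, pix *out, int n)
{
    for (int i = 0; i < n; i++) out[i] = a[i] * b[i];
}

void harris(const pix *I, pix *K, int h, int w)
{
    const float sobelX[9] = { -1, 0, 1, -2, 0, 2, -1, 0, 1 };
    const float sobelY[9] = { -1, -2, -1, 0, 0, 0, 1, 2, 1 };
    const float gauss[9]  = { 1/16.f, 2/16.f, 1/16.f,
                              2/16.f, 4/16.f, 2/16.f,
                              1/16.f, 2/16.f, 1/16.f };
    int n = h * w;
    pix *Ix  = calloc(n, sizeof *Ix),  *Iy  = calloc(n, sizeof *Iy);
    pix *Ixx = calloc(n, sizeof *Ixx), *Ixy = calloc(n, sizeof *Ixy), *Iyy = calloc(n, sizeof *Iyy);
    pix *Sxx = calloc(n, sizeof *Sxx), *Sxy = calloc(n, sizeof *Sxy), *Syy = calloc(n, sizeof *Syy);

    conv3x3(I, Ix, h, w, sobelX);       /* convolution 1: horizontal gradient */
    conv3x3(I, Iy, h, w, sobelY);       /* convolution 2: vertical gradient   */
    mul(Ix, Ix, Ixx, n); mul(Ix, Iy, Ixy, n); mul(Iy, Iy, Iyy, n);
    conv3x3(Ixx, Sxx, h, w, gauss);     /* convolutions 3-5: Gaussian smoothing */
    conv3x3(Ixy, Sxy, h, w, gauss);
    conv3x3(Iyy, Syy, h, w, gauss);

    /* Harris response: det(M) - kappa * trace(M)^2 (kappa is an assumption). */
    const float kappa = 0.04f;
    for (int i = 0; i < n; i++)
        K[i] = Sxx[i] * Syy[i] - Sxy[i] * Sxy[i]
             - kappa * (Sxx[i] + Syy[i]) * (Sxx[i] + Syy[i]);

    free(Ix); free(Iy); free(Ixx); free(Ixy); free(Iyy);
    free(Sxx); free(Sxy); free(Syy);
}

Each convolution is an independent 2D stencil pass, which is why the operator offers several opportunities to combine SIMD and memory-layout optimizations (fusion of passes, reuse of intermediate planes) rather than the single pass of a simple filter.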
Domains
Data Structures and Algorithms [cs.DS]
Hardware Architecture [cs.AR]
Software Engineering [cs.SE]
Discrete Mathematics [cs.DM]
Robotics [cs.RO]
Image Processing [eess.IV]
Signal and Image Processing
Computer Vision and Pattern Recognition [cs.CV]
Automatic Signal and Image processing
Computer Arithmetic
Distributed, Parallel, and Cluster Computing [cs.DC]
Origin: Files produced by the author(s)