FPGA-based CNN Acceleration using Pattern-Aware Pruning
Abstract
While convolutional neural networks (CNNs) have demonstrated exceptional performance in computer vision, optimizing FPGA-based CNN accelerators remains challenging due to resource constraints. This is especially true for sequential designs, which are limited by external memory access. Despite the benefits of sparsity, most existing sparse accelerators are sequential and therefore memory-bound. We introduce a dataflow CNN architecture enriched with structured sparsity through pattern pruning. In our approach, pattern pruning serves as a fine-tuning step that reduces FPGA resource consumption, including memory and logic. Experimental results show that our method achieves lower latency than other dataflow approaches while maintaining accuracy competitive with state-of-the-art unstructured pruning methods. We demonstrate the versatility of our approach on image classification and super-resolution, where we sustain a consistent 30 frames per second across a wide range of image sizes on the Set5 dataset.
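To illustrate the general idea of pattern pruning (not the paper's exact algorithm), the sketch below shows one common formulation: each 3x3 kernel is assigned, by retained weight magnitude, one mask from a small dictionary of fixed sparsity patterns, and the weights outside that pattern are zeroed before fine-tuning. The pattern dictionary `PATTERNS` and the helper `pattern_prune` are hypothetical names introduced for this example only.

```python
import torch

# Hypothetical dictionary of 4-entry patterns for 3x3 kernels (assumption:
# each pattern keeps 4 of the 9 weights; the rest are forced to zero).
PATTERNS = torch.tensor([
    [[1, 1, 0], [1, 1, 0], [0, 0, 0]],
    [[0, 1, 1], [0, 1, 1], [0, 0, 0]],
    [[0, 0, 0], [1, 1, 0], [1, 1, 0]],
    [[0, 0, 0], [0, 1, 1], [0, 1, 1]],
], dtype=torch.float32)

def pattern_prune(weight: torch.Tensor) -> torch.Tensor:
    """Assign each 3x3 kernel the pattern that preserves the most absolute
    magnitude, then zero the weights outside that pattern."""
    out_c, in_c, kh, kw = weight.shape
    kernels = weight.reshape(-1, kh, kw)                    # (N, 3, 3)
    # Score each kernel against each pattern by retained absolute magnitude.
    scores = torch.einsum('nij,pij->np', kernels.abs(), PATTERNS)
    best = scores.argmax(dim=1)                             # best pattern per kernel
    masks = PATTERNS[best]                                  # (N, 3, 3)
    return (kernels * masks).reshape(out_c, in_c, kh, kw)

# Example: prune a conv layer's weights, then fine-tune the network as usual.
conv = torch.nn.Conv2d(64, 64, kernel_size=3, padding=1)
with torch.no_grad():
    conv.weight.copy_(pattern_prune(conv.weight))
```

Because every surviving kernel follows one of a few known shapes, the resulting sparsity is structured, which is what allows a dataflow FPGA design to exploit it with fixed, lightweight logic rather than irregular indexing.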