Conference paper, Year: 2021

Accelerating Large-Scale Deep Convolutional Neural Networks on Multi-core Vector Accelerators

Zhong Liu, Sheng Ma, Cheng Li, Haiyan Chen

Abstract

This paper proposes an efficient algorithm mapping method for accelerating deep convolutional neural networks, which includes: (1) an efficient transformation method that converts the computations of a CNN's convolutional and fully connected layers into large-scale matrix multiplications, and converts pooling-layer computations into efficient matrix row computations; (2) a set of general and efficient vectorization methods for the convolutional, fully connected and pooling layers on the vector accelerator. The experimental results on the accelerator show that the average computing efficiency of the convolutional and fully connected layers of AlexNet, VGG-19, GoogleNet and ResNet-50 is 93.3% and 93.4%, respectively, and the average data access efficiency of the pooling layer is 70%.
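As a rough illustration of the first point (not the authors' exact mapping, which is described in the paper itself), the sketch below shows how an im2col-style rearrangement turns a convolutional layer into a single large matrix multiplication. All function names, shapes and parameters here are illustrative assumptions.

```python
import numpy as np

def im2col_conv(x, w, stride=1):
    """Illustrative im2col-style convolution-as-GEMM (hypothetical helper).

    x: input feature maps, shape (C_in, H, W)
    w: filters, shape (C_out, C_in, K, K)
    Returns output feature maps, shape (C_out, H_out, W_out).
    """
    C_in, H, W = x.shape
    C_out, _, K, _ = w.shape
    H_out = (H - K) // stride + 1
    W_out = (W - K) // stride + 1

    # Unfold every KxK receptive field into a column:
    # the result is a (C_in*K*K) x (H_out*W_out) matrix.
    cols = np.empty((C_in * K * K, H_out * W_out), dtype=x.dtype)
    for i in range(H_out):
        for j in range(W_out):
            patch = x[:, i*stride:i*stride+K, j*stride:j*stride+K]
            cols[:, i * W_out + j] = patch.reshape(-1)

    # Flatten the filters into a (C_out) x (C_in*K*K) matrix, so the whole
    # layer reduces to one large matrix multiplication (GEMM).
    w_mat = w.reshape(C_out, -1)
    out = w_mat @ cols                      # shape (C_out, H_out*W_out)
    return out.reshape(C_out, H_out, W_out)
```

In the same spirit, a fully connected layer is already a matrix-matrix product when a batch of inputs is stacked, and pooling can be phrased as row-wise reductions (max or mean) over a similarly unfolded matrix, which is what allows one set of vectorized GEMM and row kernels to serve all three layer types.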

Dates and versions

hal-03768763, version 1 (04-09-2022)

Cite

Zhong Liu, Sheng Ma, Cheng Li, Haiyan Chen. Accelerating Large-Scale Deep Convolutional Neural Networks on Multi-core Vector Accelerators. 17th IFIP International Conference on Network and Parallel Computing (NPC), Sep 2020, Zhengzhou, China. pp.68-79, ⟨10.1007/978-3-030-79478-1_6⟩. ⟨hal-03768763⟩