Journal article, ACM Transactions on Embedded Computing Systems (TECS), 2024

Combining Weight Approximation, Sharing and Retraining for Neural Network Model Compression

Abstract

Neural network model compression is essential for deploying models within the memory and storage available on different computing systems. The continuous drive for higher accuracy increases model size and complexity, making deployment in resource-constrained computing environments challenging. This article proposes several model compression algorithms that exploit weight characteristics and studies their performance in depth. The algorithms manipulate the exponent and mantissa bits in the floating-point representations of weights. In addition, we present a retraining method that uses the proposed algorithms to further reduce the size of pre-trained models. The results reported in this article mainly target the BFloat16 floating-point format. The proposed weight manipulation algorithms save at least 20% of memory on state-of-the-art image classification models with very minor accuracy loss. This loss is bridged by the retraining method, which saves at least 30% of memory, with potential savings of up to 43%. We compare the proposed methods against state-of-the-art model compression techniques in terms of accuracy, memory savings, inference time, and energy.
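
To make the bit-level idea concrete, below is a minimal Python/NumPy sketch, not the authors' exact algorithms, of how float32 weights can be reduced to the BFloat16 layout (1 sign bit, 8 exponent bits, 7 mantissa bits) and how low-order mantissa bits can then be zeroed so that more weights collapse onto shared values. The function names and the number of mantissa bits kept are illustrative assumptions, not taken from the paper.

    import numpy as np

    def to_bfloat16_bits(weights_fp32):
        # View float32 weights as raw bits and keep the top 16 bits
        # (BFloat16 layout: 1 sign bit, 8 exponent bits, 7 mantissa bits).
        bits = np.asarray(weights_fp32, dtype=np.float32).view(np.uint32)
        return (bits >> 16).astype(np.uint16)

    def truncate_mantissa(bf16_bits, kept_mantissa_bits):
        # Zero the low-order mantissa bits: fewer distinct bit patterns means
        # more weights become identical and can be shared.
        drop = 7 - kept_mantissa_bits
        mask = np.uint16((0xFFFF << drop) & 0xFFFF)
        return bf16_bits & mask

    def from_bfloat16_bits(bf16_bits):
        # Re-expand BFloat16 bit patterns to float32 for inspection.
        return (bf16_bits.astype(np.uint32) << 16).view(np.float32)

    # Illustrative use: approximate a random weight tensor and count unique values.
    w = np.random.randn(1000).astype(np.float32)
    bf = to_bfloat16_bits(w)
    approx = truncate_mantissa(bf, kept_mantissa_bits=3)  # assumed setting
    print("unique BFloat16 values:", np.unique(bf).size)
    print("unique values after truncation:", np.unique(approx).size)
    print("max absolute error:", np.abs(w - from_bfloat16_bits(approx)).max())

Truncating mantissa bits trades precision for a smaller set of distinct weight values; this is only one plausible illustration of exponent/mantissa manipulation, and the article itself should be consulted for the exact algorithms and the retraining procedure.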

Main file: Prachi_TECS24.pdf (1.63 MB). Origin: files produced by the author(s).

Dates and versions

hal-04764621, version 1 (04-11-2024)

Identifiers

Cite

Prachi Kashikar, Olivier Sentieys, Sharad Sinha. Combining Weight Approximation, Sharing and Retraining for Neural Network Model Compression. ACM Transactions on Embedded Computing Systems (TECS), 2024, 23, pp. 1-23. ⟨10.1145/3687466⟩. ⟨hal-04764621⟩