Conference paper, 2022

A Posit8 Decompression Operator for Deep Neural Network Inference

Abstract

We propose a hardware operator to decompress Posit8 representations with exponent sizes 0, 1, 2, 3 to the IEEE 754 binary16 (FP16) representation. The motivation is to leverage the tensor units of a manycore processor that already supports FP16.32 matrix multiply-accumulate operations for deep learning inference. According to our experiments, adding instructions to decompress Posit8 into FP16 numbers would make it possible to further reduce the footprint of deep neural network parameters with an acceptable loss of accuracy or precision. We present the design of our decompression operator and compare it to lookup-table implementations for the technology node of the targeted processor.
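For illustration, here is a minimal C sketch of such a decompression in software, assuming the standard posit decoding rules (sign, regime, es exponent bits, fraction). The function name posit8_to_fp16, the NaR-to-NaN mapping, the saturation of out-of-range values to infinity, and the truncation in the subnormal range are choices made for this sketch, not details taken from the paper's hardware design.

```c
#include <stdint.h>
#include <stdio.h>

/* Software model of a posit8 -> binary16 decompression, written to
 * illustrate the conversion described in the abstract; it is not the
 * paper's hardware operator. "es" is the posit exponent-field width
 * (0 to 3); the return value is an IEEE 754 binary16 bit pattern. */
static uint16_t posit8_to_fp16(uint8_t p, int es)
{
    if (p == 0x00) return 0x0000;       /* posit zero -> +0.0 */
    if (p == 0x80) return 0x7E00;       /* NaR -> quiet NaN (assumed mapping) */

    uint16_t sign = (p & 0x80) ? 0x8000 : 0x0000;
    uint8_t  bits = (p & 0x80) ? (uint8_t)(0u - p) : p; /* two's complement of negatives */

    /* Regime: run of identical bits starting at bit 6, ended by the
     * opposite bit or by the end of the encoding. */
    int regime_bit = (bits >> 6) & 1;
    int idx = 5;
    while (idx >= 0 && ((bits >> idx) & 1) == regime_bit)
        idx--;                          /* idx = terminating bit position, or -1 */
    int run = 6 - idx;
    int k = regime_bit ? run - 1 : -run;

    /* Bits below the regime terminator: up to es exponent bits, then fraction. */
    int payload_len = (idx > 0) ? idx : 0;
    uint8_t payload = bits & (uint8_t)((1u << payload_len) - 1);

    int e, frac_len;
    uint16_t frac;
    if (payload_len >= es) {
        e = payload >> (payload_len - es);
        frac_len = payload_len - es;
        frac = payload & (uint8_t)((1u << frac_len) - 1);
    } else {
        e = payload << (es - payload_len);  /* missing exponent bits read as 0 */
        frac_len = 0;
        frac = 0;
    }

    int scale = k * (1 << es) + e;                    /* unbiased power of two */
    uint16_t m = (uint16_t)(frac << (10 - frac_len)); /* <= 5 fraction bits: exact */
    int be = scale + 15;                              /* binary16 biased exponent */

    if (be >= 0x1F)                     /* too large for binary16 */
        return sign | 0x7C00;           /* saturate to infinity (a design choice) */
    if (be <= 0) {                      /* binary16 subnormal range */
        int shift = 1 - be;
        uint16_t mant = (uint16_t)(0x0400 | m);  /* significand with hidden bit */
        return (shift > 11) ? sign : (uint16_t)(sign | (mant >> shift)); /* truncates */
    }
    return (uint16_t)(sign | (be << 10) | m);
}

int main(void)
{
    /* 0x50 with es=0 encodes 1.5, which is 0x3E00 in binary16. */
    printf("posit8 0x50 (es=0) -> fp16 0x%04X\n", posit8_to_fp16(0x50, 0));
    /* 0xC0 encodes -1.0 for any es: fp16 0xBC00. */
    printf("posit8 0xC0 (es=2) -> fp16 0x%04X\n", posit8_to_fp16(0xC0, 2));
    return 0;
}
```

Since a posit8 carries at most 5 - es fraction bits versus binary16's 10, the conversion is exact whenever the scale fits the binary16 normal range; only over- and underflowing scales, which occur for the larger es values, force the saturation and truncation choices noted above.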
Main file: A_Posit8_Decompression_Operator_for_Neural_Networks_Inference-1.pdf (272.77 KB)
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04240741, version 1 (13-10-2023)

License

CC BY - Attribution

Identifiers

HAL Id: hal-04240741
DOI: 10.1007/978-3-031-09779-9_2

Cite

Orégane Desrentes, Diana Resmerita, Benoît Dupont de Dinechin. A Posit8 Decompression Operator for Deep Neural Network Inference. CoNGA 2022 - Third International Conference on Next Generation Arithmetic, Mar 2022, Singapore. pp. 14-30, ⟨10.1007/978-3-031-09779-9_2⟩. ⟨hal-04240741⟩
