A Posit8 Decompression Operator for Deep Neural Network Inference
Abstract
We propose a hardware operator to decompress Posit8 representations with exponent sizes 0, 1, 2, and 3 into the IEEE 754 binary16 (FP16) representation. The motivation is to leverage the tensor units of a manycore processor that already supports FP16.32 matrix multiply-accumulate operations for deep learning inference. According to our experiments, adding instructions to decompress Posit8 into FP16 numbers would make it possible to further reduce the footprint of deep neural network parameters with an acceptable loss of accuracy or precision. We present the design of our decompression operator and compare it to lookup-table implementations for the technology node of the targeted processor.
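For context, a Posit8 value with exponent size es encodes a sign, a variable-length regime, up to es exponent bits, and a fraction with an implicit leading 1; its value is (-1)^s * 2^(k * 2^es + e) * (1 + f), where k is derived from the regime run length. The C sketch below is a minimal software model of this decoding, shown only to illustrate the kind of conversion the operator performs; the function name and structure are our own illustration, not the paper's hardware design.

/* Minimal software sketch (illustration only, not the paper's hardware design):
 * decode an 8-bit posit with exponent size `es` (0..3) into a double.
 * The hardware operator described in the paper produces an FP16 encoding instead. */
#include <stdint.h>
#include <math.h>
#include <stdio.h>

static double posit8_decode(uint8_t p, unsigned es)
{
    if (p == 0x00) return 0.0;   /* unique zero encoding */
    if (p == 0x80) return NAN;   /* Not-a-Real (NaR) encoding */

    int sign = (p & 0x80) ? -1 : 1;
    /* Negative posits are decoded from the two's complement of the bit pattern. */
    uint8_t bits = (p & 0x80) ? (uint8_t)(0x100 - p) : p;

    /* Regime: run of identical bits after the sign, terminated by the opposite bit. */
    int run_bit = (bits >> 6) & 1;
    int run = 0, pos = 6;
    while (pos >= 0 && ((bits >> pos) & 1) == run_bit) { run++; pos--; }
    int k = run_bit ? run - 1 : -run;
    pos--;                        /* skip the terminating regime bit, if any */

    /* Exponent: up to `es` bits, zero-padded when the encoding runs out of bits. */
    int e = 0;
    for (unsigned i = 0; i < es; i++) {
        e <<= 1;
        if (pos >= 0) e |= (bits >> pos) & 1;
        pos--;
    }

    /* Fraction: remaining bits with an implicit leading 1. */
    double frac = 1.0, w = 0.5;
    for (; pos >= 0; pos--, w *= 0.5)
        if ((bits >> pos) & 1) frac += w;

    /* value = (-1)^s * 2^(k * 2^es + e) * (1 + f) */
    return sign * ldexp(frac, k * (1 << es) + e);
}

int main(void)
{
    /* 0x40 encodes 1.0 for any es; 0x60 encodes 4.0 when es = 1. */
    printf("%g %g\n", posit8_decode(0x40, 1), posit8_decode(0x60, 1));
    return 0;
}

Mapping the decoded value to FP16 additionally involves rounding and handling values that fall outside the FP16 range, which a complete FP16-producing conversion would also need to address.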
Domains
Computer arithmetic