Conference paper, year: 2023

Exact Dot Product Accumulate Operators for 8-bit Floating-Point Deep Learning

Julien Le Maire
  • Role: Author
  • PersonId: 1294238

Abstract

Low bit-width floating-point formats appear as the main alternative to 8-bit integers for quantized deep learning applications. We propose an architecture for exact dot product accumulate operators and compare its implementation costs for different 8-bit floating-point formats: FP8 with five exponent bits and two fraction bits (E5M2), FP8 with four exponent bits and three fraction bits (E4M3), and Posit8 formats with different exponent sizes. The front-ends of these exact dot product accumulate operators take 8-bit multiplicands, expand their full-precision products to fixed-point, and sum the terms into wide accumulators. The back-ends of these operators round down the wide accumulator contents first to FP32 and then to one of the 8-bit floating-point formats. We synthesize the proposed 8-bit floating-point exact dot product accumulate operators targeting the TSMC 16FFC node and compare their area and power to a baseline of operators with FP16 and INT8 multiplicands.
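
As a rough illustration of the front-end and accumulation steps described in the abstract, the following Python sketch decodes FP8 E4M3 operands, forms their full-precision products, and sums them exactly into a wide fixed-point accumulator. This is a behavioural model, not the authors' hardware design; the function names (decode_e4m3, exact_dot), the accumulator width ACC_FRAC, and the software decoding path are illustrative assumptions, and the final rounding to FP32 and then to FP8 is only indicated in a comment.

    # Behavioural sketch of an exact dot product accumulate for FP8 E4M3 inputs,
    # assuming a Kulisch-style wide fixed-point accumulator (illustrative only).

    E_BITS, M_BITS, BIAS = 4, 3, 7            # FP8 E4M3 field widths and exponent bias
    ACC_FRAC = 2 * (BIAS + M_BITS - 1)        # fractional bits covering the smallest E4M3 product

    def decode_e4m3(byte: int) -> float:
        """Return the real value of an FP8 E4M3 encoding (NaN handling omitted)."""
        sign = -1.0 if (byte >> 7) & 1 else 1.0
        exp = (byte >> M_BITS) & ((1 << E_BITS) - 1)
        man = byte & ((1 << M_BITS) - 1)
        if exp == 0:                          # subnormal: no implicit leading one
            return sign * man * 2.0 ** (1 - BIAS - M_BITS)
        return sign * (man + (1 << M_BITS)) * 2.0 ** (exp - BIAS - M_BITS)

    def exact_dot(a_bytes, b_bytes) -> float:
        """Sum full-precision FP8 products into a wide fixed-point accumulator."""
        acc = 0                               # wide integer with ACC_FRAC fractional bits
        for a, b in zip(a_bytes, b_bytes):
            prod = decode_e4m3(a) * decode_e4m3(b)   # exact: each operand has at most 4 significand bits
            acc += int(prod * (1 << ACC_FRAC))       # exact fixed-point alignment, no rounding
        # A hardware back-end would round this value down to FP32, then to FP8.
        return acc / (1 << ACC_FRAC)

    # Example: dot product of two small FP8 E4M3 vectors (encodings chosen by hand).
    if __name__ == "__main__":
        x = [0x38, 0x40, 0xB8]                # 1.0, 2.0, -1.0 in E4M3
        y = [0x38, 0x38, 0x30]                # 1.0, 1.0, 0.5 in E4M3
        print(exact_dot(x, y))                # 1.0*1.0 + 2.0*1.0 + (-1.0)*0.5 = 2.5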

Main file
DSD2023-5.pdf (297.26 KB)
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04240816, version 1 (13-10-2023)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-04240816, version 1

Cite

Orégane Desrentes, Benoît Dupont de Dinechin, Julien Le Maire. Exact Dot Product Accumulate Operators for 8-bit Floating-Point Deep Learning. DSD/SEAA 2023 - 26th Euromicro Conference Series on Digital System Design, Sep 2023, Durres, Albania. ⟨hal-04240816⟩
