Error bounds on complex floating-point multiplication with an FMA
Abstract
The accuracy analysis of complex floating-point multiplication done by Brent, Percival, and Zimmermann [{\it Math.\ Comp.}, 76:1469--1481, 2007] is extended to the case where a fused multiply-add (FMA) operation is available. Considering floating-point arithmetic with rounding to nearest and unit roundoff $u$, we show that their bound $\sqrt 5 \, u$ on the normwise relative error $|\hat z/z-1|$ of a complex product $z$ can be decreased further to $2u$ when using the FMA in the most naive way. Furthermore, we prove that the term $2u$ is asymptotically optimal not only for this naive FMA-based algorithm, but also for two other algorithms, which use the FMA operation as an efficient way of implementing rounding error compensation. Thus, although highly accurate in the componentwise sense, these two compensated algorithms bring no improvement to the normwise accuracy $2u$ already achieved using the FMA naively. Asymptotic optimality is established for each algorithm thanks to the explicit construction of floating-point inputs for which we prove that the normwise relative error then generated satisfies $|\hat z/z-1| \to 2u$ as $u\to 0$. All our results hold for IEEE floating-point arithmetic, with radix $\beta \ge 2$, precision $p \ge 2$, and rounding to nearest; it is only assumed that underflows and overflows do not occur and, when bounding errors from below, that $\beta^{p-1} \ge 12$.
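The abstract only names the algorithms it analyses; the C sketch below illustrates the two ingredients it refers to, namely the "naive" use of an FMA in the complex product and a Kahan-style compensated evaluation of $ac \pm bd$ in which the FMA recovers the rounding error of one product exactly. The routine names (cmul_fma, kahan_dot2, cmul_compensated) and the choice of Kahan's scheme as the compensated variant are illustrative assumptions, not necessarily the paper's exact formulation.

```c
#include <math.h>   /* fma(x, y, z): x*y + z with a single rounding (C99) */
#include <stdio.h>

/* Naive FMA-based complex product (a + ib)(c + id):
 * each component is obtained with one multiplication and one FMA,
 * so only two roundings per component instead of three. */
static void cmul_fma(double a, double b, double c, double d,
                     double *re, double *im)
{
    *re = fma(a, c, -(b * d));  /* fl( a*c - fl(b*d) ) */
    *im = fma(a, d,  (b * c));  /* fl( a*d + fl(b*c) ) */
}

/* Kahan-style compensated evaluation of x*y + z*w:
 * the FMA extracts the rounding error of the product z*w exactly,
 * and that error is added back as a correction term. */
static double kahan_dot2(double x, double y, double z, double w)
{
    double h = z * w;
    double e = fma(z, w, -h);   /* exact error: z*w - fl(z*w) */
    double f = fma(x, y, h);
    return f + e;
}

/* Compensated complex product built from the routine above. */
static void cmul_compensated(double a, double b, double c, double d,
                             double *re, double *im)
{
    *re = kahan_dot2(a, c, -b, d);  /* a*c - b*d */
    *im = kahan_dot2(a, d,  b, c);  /* a*d + b*c */
}

int main(void)
{
    double re1, im1, re2, im2;
    cmul_fma        (1.0/3.0, 2.0/3.0, 4.0/3.0, 5.0/3.0, &re1, &im1);
    cmul_compensated(1.0/3.0, 2.0/3.0, 4.0/3.0, 5.0/3.0, &re2, &im2);
    printf("naive FMA:   %.17g + %.17gi\n", re1, im1);
    printf("compensated: %.17g + %.17gi\n", re2, im2);
    return 0;
}
```

Each call to fma rounds its result only once, which is the property the error analysis relies on; the compensated routine is much more accurate componentwise, but, as the abstract states, its normwise relative error bound remains $2u$.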
Domains
Computer Arithmetic

Origin: Files produced by the author(s)