Floating-Point LLL Revisited
Abstract
The Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL or L^3) is a very popular tool in public-key cryptanalysis and in many other fields. Given an integer $d$-dimensional lattice basis with $n$-dimensional vectors of norm less than $B$, L^3 outputs a so-called L^3-reduced basis in polynomial time $O(d^5 n \log^3 B)$, using arithmetic operations on integers of bit-length $O(d \log B)$. This worst-case complexity is problematic for lattices arising in cryptanalysis, where $d$ and/or $\log B$ are often large. As a result, the original L^3 algorithm is almost never used in practice. Instead, one applies floating-point variants of L^3, in which the long-integer arithmetic required by Gram-Schmidt orthogonalisation (central to L^3) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point L^3 is not even guaranteed to terminate, and the output basis may not be L^3-reduced at all. In this article, we introduce the L^2 algorithm, a new and natural floating-point variant of L^3 which provably outputs L^3-reduced bases in polynomial time $O(d^4 n (d+\log B) \log B)$. This is the first L^3 algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to $\log B$, like the well-known Euclidean and Gaussian algorithms, which it generalizes.
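As a brief illustration (the notation below is standard but chosen here for exposition, not fixed by this abstract), the Gram-Schmidt orthogonalisation central to L^3 computes, from a basis $b_1, \dots, b_d$, the vectors and coefficients
\[
  b_i^{*} \;=\; b_i - \sum_{j=1}^{i-1} \mu_{i,j}\, b_j^{*},
  \qquad
  \mu_{i,j} \;=\; \frac{\langle b_i, b_j^{*} \rangle}{\|b_j^{*}\|^{2}},
\]
and a basis is L^3-reduced (with factor $\delta$, classically $\delta = 3/4$) when it is size-reduced, i.e. $|\mu_{i,j}| \le 1/2$ for all $j < i$, and satisfies Lovász's condition $\|b_i^{*}\|^{2} \ge (\delta - \mu_{i,i-1}^{2})\, \|b_{i-1}^{*}\|^{2}$. Computing the $\mu_{i,j}$ exactly requires rational arithmetic on large integers, which dominates the classical cost; approximating them in floating-point is what causes the instability mentioned above.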