Computing minimal interpolation bases
Abstract
We consider the problem of computing univariate polynomial matrices over a
field that represent minimal solution bases for a general interpolation
problem, some forms of which are the vector M-Padé approximation problem in
[Van Barel and Bultheel, Numerical Algorithms 3, 1992] and the rational
interpolation problem in [Beckermann and Labahn, SIAM J. Matrix Anal. Appl. 22,
2000]. Particular instances of this problem include the bivariate interpolation
steps of Guruswami-Sudan hard-decision and Kötter-Vardy soft-decision
decoding of Reed-Solomon codes, the multivariate interpolation step in
list-decoding of folded Reed-Solomon codes, and Hermite-Padé approximation.
In the references above, the problem is solved using iterative algorithms
based on recurrence relations. Here, we discuss a fast, divide-and-conquer
version of this recurrence, taking advantage of fast matrix computations over
the scalars and over the polynomials. This new algorithm is deterministic, and
for computing shifted minimal bases of relations between $m$ vectors of size
$\sigma$ it uses $O~( m^{\omega-1} (\sigma + |s|) )$ field operations, where
$\omega$ is the exponent of matrix multiplication, and $|s|$ is the sum of the
entries of the input shift $s$, with $\min(s) = 0$. This complexity bound
improves in particular on earlier algorithms in the case of bivariate
interpolation for soft decoding, while matching the fastest existing algorithms
for simultaneous Hermite-Padé approximation.
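As a concrete illustration of one instance covered by this framework, the sketch below sets up Hermite-Padé approximation over a small prime field by brute-force linear algebra: it looks for a single nonzero relation between $m$ truncated power series rather than a whole shift-minimal basis, and it does not reflect the paper's divide-and-conquer algorithm or its complexity bound. The prime $p = 7$, the degree bound $d$, and the helper names (`nullspace_vector`, `hermite_pade_relation`) are illustrative choices, not taken from the paper.

```python
# Brute-force illustration of Hermite-Pade approximation over GF(p):
# given m power series truncated at order sigma, find a nonzero vector
# (p_1, ..., p_m) of polynomials with deg p_i < d such that
# p_1*f_1 + ... + p_m*f_m = 0 mod x^sigma.
# This is plain nullspace computation over GF(p); the paper's algorithm
# instead returns a full shift-minimal basis of all such relations.

p = 7  # small prime; GF(7) is an arbitrary illustrative choice


def nullspace_vector(A, rows, cols):
    """Return one nonzero nullspace vector of the rows-by-cols matrix A over GF(p), or None."""
    A = [row[:] for row in A]
    pivot_col_of_row = []
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] % p), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)  # inverse via Fermat's little theorem
        A[r] = [(x * inv) % p for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(x - f * y) % p for x, y in zip(A[i], A[r])]
        pivot_col_of_row.append(c)
        r += 1
        if r == rows:
            break
    free = [c for c in range(cols) if c not in set(pivot_col_of_row)]
    if not free:
        return None
    v = [0] * cols
    v[free[0]] = 1  # set one free variable to 1
    for i, c in enumerate(pivot_col_of_row):
        v[c] = (-A[i][free[0]]) % p  # back-substitute the pivot variables
    return v


def hermite_pade_relation(series, sigma, d):
    """series: list of m coefficient lists (length sigma, low to high) over GF(p).
    Returns m coefficient lists of length d giving one relation, or None."""
    m = len(series)
    # Unknown coefficient of x^k in p_j sits in column j*d + k;
    # row n encodes the coefficient of x^n in p_1*f_1 + ... + p_m*f_m.
    A = [[0] * (m * d) for _ in range(sigma)]
    for j, f in enumerate(series):
        for k in range(d):
            for t in range(sigma - k):
                A[k + t][j * d + k] = f[t] % p
    v = nullspace_vector(A, sigma, m * d)
    if v is None:
        return None
    return [v[j * d:(j + 1) * d] for j in range(m)]


# Example: f1 = 1 + x + x^2 + x^3 and f2 = 1 + 2x + 4x^2 + x^3 over GF(7),
# order sigma = 4, degree bound d = 3 on each p_i.
rel = hermite_pade_relation([[1, 1, 1, 1], [1, 2, 4, 1]], sigma=4, d=3)
print(rel)  # coefficient lists of (p1, p2) with p1*f1 + p2*f2 = 0 mod x^4
```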