Orthogonalization schemes in tensor train format
Abstract
In the framework of tensor spaces, we consider orthogonalization kernels that generate an orthogonal basis of a tensor subspace from a set of linearly independent tensors. In particular, we investigate experimentally the loss of orthogonality of six orthogonalization methods, namely Classical and Modified Gram-Schmidt with (CGS2, MGS2) and without (CGS, MGS) re-orthogonalization, the Gram approach, and the Householder transformation. To tackle the curse of dimensionality, we represent tensors by low-rank approximations in the Tensor Train (TT) format, and we introduce recompression steps into the standard algorithm outline via the TT-rounding method at a prescribed precision. After describing the structure and properties of the algorithms, we show experimentally that the theoretical bounds on the loss of orthogonality established by classical round-off analysis for matrix computations still hold, with the unit round-off replaced by the TT-rounding precision. A computational analysis of each orthogonalization kernel, in terms of memory requirements and of computational complexity measured as the number of TT-rounding operations, which turn out to be the most expensive step, completes the study.
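For context, the matrix-case bounds referred to above take the following classical shapes (a sketch of standard round-off analysis results, not quoted from this paper: A is an m-by-n full-rank matrix, Q-hat the computed orthogonal factor, u the unit round-off, kappa_2(A) the 2-norm condition number, and the c_i(m,n) low-degree polynomial constants, all valid when u*kappa_2(A) is sufficiently small). The abstract's observation is that these shapes persist in the TT experiments with u replaced by the TT-rounding precision epsilon.

```latex
% Classical loss-of-orthogonality bounds in floating-point arithmetic;
% the experiments summarized above observe the same shapes with the
% unit round-off u replaced by the TT-rounding precision \varepsilon.
\left\| I - \hat{Q}^{T} \hat{Q} \right\|_{2} \;\lesssim\;
\begin{cases}
  c_{1}(m,n)\, u                       & \text{Householder, CGS2, MGS2},\\[2pt]
  c_{2}(m,n)\, u\, \kappa_{2}(A)       & \text{MGS},\\[2pt]
  c_{3}(m,n)\, u\, \kappa_{2}(A)^{2}   & \text{CGS (and, similarly, the Gram approach)},
\end{cases}
```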
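To make the algorithmic skeleton concrete, here is a minimal dense-vector sketch of Modified Gram-Schmidt with an optional re-orthogonalization pass (MGS2) and a rounding hook. It is not the paper's TT implementation: in the TT setting each element of `vectors` would be a tensor in TT format, every subtraction would grow the TT ranks, and `round_op` would be TT-rounding (recompression) at the prescribed precision; here plain NumPy arrays and an identity `round_op` stand in for both.

```python
import numpy as np

def orthogonalize_mgs(vectors, round_op=lambda x: x, reorth=False):
    """Modified Gram-Schmidt with an optional second pass (MGS2).

    `round_op` marks where the TT algorithm would recompress the
    iterate; with dense vectors it is the identity by default.
    """
    basis = []
    for v in vectors:
        w = v.copy()
        for _ in range(2 if reorth else 1):
            for q in basis:
                # Project out the direction q, then recompress the result.
                w = round_op(w - np.dot(q, w) * q)
        basis.append(round_op(w / np.linalg.norm(w)))
    return basis

# Loss of orthogonality || I - Q^T Q || on a mildly ill-conditioned set:
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10)) @ np.diag(np.logspace(0, -8, 10))
Q1 = np.column_stack(orthogonalize_mgs(list(A.T)))
Q2 = np.column_stack(orthogonalize_mgs(list(A.T), reorth=True))
print(np.linalg.norm(np.eye(10) - Q1.T @ Q1))  # grows with kappa(A) for MGS
print(np.linalg.norm(np.eye(10) - Q2.T @ Q2))  # near unit round-off for MGS2
```

The hook placement reflects the structure described in the abstract: each rank-increasing operation (projection update, normalization) is immediately followed by a recompression, so the number of `round_op` calls is the natural complexity measure.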