Research Report, 2015

Scalable sparse tensor decompositions in distributed memory systems

Décompositions de tenseurs creux dans les systèmes à mémoire distribuée

Abstract

We investigate an efficient parallelization of the most common iterative sparse tensor decomposition algorithms on distributed memory systems. A key operation in each iteration of these algorithms is the matricized tensor times Khatri-Rao product (MTTKRP). This operation amounts to element-wise vector multiplications and reductions whose pattern depends on the sparsity of the tensor. We investigate fine-grain and coarse-grain task definitions for this operation, and propose hypergraph partitioning-based methods for these task definitions to achieve load balance as well as to reduce the communication requirements. We also design a distributed memory sparse tensor library, HyperTensor, which implements a well-known algorithm for the CANDECOMP-PARAFAC (CP) tensor decomposition using the task definitions and the associated partitioning methods. We use this library to test the proposed implementation of MTTKRP in the context of the CP decomposition, and report scalability results up to 1024 MPI ranks. We demonstrate up to 194-fold speedups using 512 MPI processes on a well-known real-world dataset, and significantly better performance than a state-of-the-art implementation.
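
To make the MTTKRP kernel concrete, the sketch below shows a sequential mode-0 MTTKRP for a third-order sparse tensor stored in coordinate (COO) format. This is only an illustrative Python sketch, not the HyperTensor implementation: the function name mttkrp_mode0, the array layout, and the factor matrices A, B, C are assumptions introduced for the example, and the parallel task definitions and hypergraph-based partitioning studied in the report are not reflected here.

import numpy as np

def mttkrp_mode0(indices, values, A, B, C):
    # Mode-0 MTTKRP for a sparse third-order tensor X in COO format:
    # M(i, :) = sum over nonzeros x_{ijk} of x_{ijk} * (B(j, :) * C(k, :)),
    # i.e., element-wise vector multiplications followed by a reduction into
    # the rows of the output, as described in the abstract.
    #   indices : (nnz, 3) integer array of nonzero coordinates (i, j, k)
    #   values  : (nnz,) array of nonzero values
    #   A, B, C : dense factor matrices of shapes (I, R), (J, R), (K, R)
    M = np.zeros_like(A)
    for (i, j, k), x in zip(indices, values):
        M[i, :] += x * (B[j, :] * C[k, :])
    return M

# Tiny usage example with random data (hypothetical sizes).
I, J, K, R, nnz = 10, 12, 14, 5, 50
rng = np.random.default_rng(0)
indices = np.column_stack([rng.integers(0, d, size=nnz) for d in (I, J, K)])
values = rng.random(nnz)
A, B, C = (rng.random((d, R)) for d in (I, J, K))
M = mttkrp_mode0(indices, values, A, B, C)  # result has shape (I, R)

In a CP decomposition algorithm such as CP-ALS, an MTTKRP of this form is computed for every mode in each iteration, which is why the report focuses on partitioning its work across MPI ranks to balance the load and reduce the communication of factor-matrix rows.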

Dates and versions

hal-01148202, version 1 (04-05-2015)
hal-01148202, version 2 (14-12-2015)

Identifiers

  • HAL Id: hal-01148202, version 1

Cite

Oguz Kaya, Bora Uçar. Scalable sparse tensor decompositions in distributed memory systems. [Research Report] RR-8722, Inria - Research Centre Grenoble – Rhône-Alpes; INRIA. 2015. ⟨hal-01148202v1⟩

Collections

INRIA-RRRT