Conference paper, 2009

High Throughput Intra-Node MPI Communication with Open-MX

Abstract

The increasing number of cores per node in high-performance computing calls for an efficient intra-node MPI communication subsystem. Most existing MPI implementations rely on two copies across a shared memory-mapped file. Open-MX offers a single-copy mechanism that is tightly integrated in its regular communication stack, making it transparently available to the MX backend of many MPI layers. We describe this implementation and its offloaded copy backend using I/OAT hardware. Memory pinning requirements are then discussed, and overlapped pinning is introduced to let Open-MX intra-node data transfers start earlier. Performance evaluation shows that this local communication stack outperforms MPICH2 and Open MPI for large messages, reaching up to 70% better throughput in micro-benchmarks when using I/OAT copy offload. Because only a single copy is involved, Open-MX intra-node communication throughput also does not heavily depend on cache sharing between processing cores, making these performance improvements easier to observe in real applications.
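
As an illustration of the kind of large-message micro-benchmark used in such throughput evaluations, the sketch below is a minimal MPI ping-pong test in C. It is not the benchmark from the paper: the message size, iteration count, and reporting are illustrative choices, and it measures whatever intra-node channel the MPI library provides (shared-memory two-copy, or Open-MX single-copy when running over the MX backend). Launch both ranks on the same node, e.g. mpirun -np 2 ./pingpong.

/* Minimal intra-node ping-pong throughput micro-benchmark (illustrative
 * sketch only). Rank 0 and rank 1 bounce a large buffer back and forth
 * and report the aggregate throughput. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE (4 * 1024 * 1024)  /* 4 MiB: the large-message regime */
#define ITERS    100

int main(int argc, char **argv)
{
    int rank;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    buf = malloc(MSG_SIZE);
    if (!buf)
        MPI_Abort(MPI_COMM_WORLD, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Two messages of MSG_SIZE bytes cross the channel per iteration. */
        double gib = (2.0 * ITERS * MSG_SIZE) / (1024.0 * 1024.0 * 1024.0);
        printf("throughput: %.2f GiB/s\n", gib / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Because the same binary runs unchanged over shared memory or over an MX/Open-MX backend, a benchmark of this shape is enough to compare the two-copy and single-copy paths, including their sensitivity to whether the two ranks share a cache.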
Main file: article.pdf (131.41 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00331209, version 1 (15-10-2008)

Identifiers

HAL Id: inria-00331209
DOI: 10.1109/PDP.2009.20

Cite

Brice Goglin. High Throughput Intra-Node MPI Communication with Open-MX. 17th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP2009), Feb 2009, Weimar, Germany. ⟨10.1109/PDP.2009.20⟩. ⟨inria-00331209⟩
359 views
310 downloads
