Kernel Assisted Collective Intra-node MPI Communication Among Multi-core and Many-core CPUs
Abstract
Shared memory is among the most common approaches to implementing message passing within multi-core nodes. However, current shared memory techniques do not scale with increasing numbers of cores and deepening memory hierarchies, most notably when handling large data transfers and collective communication. Neglecting the underlying hardware topology, relying on copy-in/copy-out memory transfers, and overloading the memory subsystem with one-to-many operations are among the most common shortcomings of today's shared memory implementations, and all of them degrade the performance and scalability of MPI libraries, and therefore of applications. In this paper, we present several kernel-assisted intra-node collective communication techniques that address these three issues on many-core systems. We also present a new Open MPI collective communication component that uses the KNEM Linux kernel module for direct inter-process memory copies. Our Open MPI component implements several novel strategies to decrease the number of intermediate memory copies and improve data locality, thereby reducing both cache pollution and memory pressure. Experimental results show that our KNEM-enabled Open MPI collective component outperforms state-of-the-art MPI libraries (Open MPI and MPICH2) on synthetic benchmarks and yields a significant improvement for a typical graph application.
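The mechanism underlying such kernel-assisted transfers is KNEM's single-copy model: one process declares a memory region to the kernel and obtains a cookie, and a peer process then asks the kernel to copy directly between the two address spaces, bypassing the copy-in/copy-out path through an intermediate shared buffer. The sketch below illustrates this pattern with the public KNEM ioctl interface; it is a minimal illustration assuming a KNEM-enabled kernel, not the paper's Open MPI component, and exact structure fields may differ between KNEM releases.

```c
/* Minimal sketch of a KNEM single-copy transfer between two processes.
 * Assumes knem_fd = open("/dev/knem", O_RDWR) succeeded; error checks
 * and cookie exchange (e.g. via a small shared-memory mailbox) omitted. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <knem_io.h>

/* Sender: expose a source buffer so a peer can pull from it. */
static uint64_t expose_send_buffer(int knem_fd, void *buf, size_t len)
{
    struct knem_cmd_param_iovec iov = { (uintptr_t) buf, len };
    struct knem_cmd_create_region create = { 0 };

    create.iovec_array = (uintptr_t) &iov;
    create.iovec_nr    = 1;
    create.protection  = PROT_READ;
    create.flags       = KNEM_FLAG_SINGLEUSE; /* region torn down after one use */
    ioctl(knem_fd, KNEM_CMD_CREATE_REGION, &create);
    return create.cookie;  /* communicated to the receiver out of band */
}

/* Receiver: have the kernel copy straight from the sender's address space. */
static void pull_from_peer(int knem_fd, uint64_t cookie, void *buf, size_t len)
{
    struct knem_cmd_param_iovec iov = { (uintptr_t) buf, len };
    struct knem_cmd_inline_copy copy = { 0 };

    copy.local_iovec_array = (uintptr_t) &iov;
    copy.local_iovec_nr    = 1;
    copy.remote_cookie     = cookie;
    copy.remote_offset     = 0;
    copy.write             = 0;  /* read from the remote region into buf */
    copy.flags             = 0;  /* synchronous copy */
    ioctl(knem_fd, KNEM_CMD_INLINE_COPY, &copy);
    /* copy.current_status reports KNEM_STATUS_SUCCESS on completion */
}
```

In a collective such as a broadcast, the root would expose its buffer once and every other rank would issue its own kernel-level copy, which is what allows the strategies described above to control locality and avoid flooding the memory subsystem with redundant intermediate copies.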
Domains
Operating Systems [cs.OS]
Origin: Files produced by the author(s)