Conference paper, 2021

Pipelined Model Parallelism: Complexity Results and Memory Considerations

Abstract

The training phase of Deep Neural Networks has become a major source of computing resource usage, and the resulting volume of computation makes it crucial to perform training efficiently on parallel architectures. Data parallelism is the most widely used method, but it requires replicating the network weights on all processors and performing collective communications of those weights. In this context, model parallelism is an attractive alternative, in which the different layers of the network are distributed over the computing processors. It is expected to distribute the weights better (to cope with memory problems), and it eliminates the need for large collective communications, since only forward activations are communicated. However, to be efficient, it must be combined with a pipelined approach, which in turn induces new memory costs. In this paper, our goal is to formalize pipelined model parallelism as a scheduling problem, to establish its complexity, and to analyze the consequences of the assumptions typically made in practical solutions such as PipeDream.
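To illustrate the pipelined approach the abstract refers to, the following minimal Python sketch simulates a GPipe-style forward schedule across pipeline stages and shows why activations accumulate on early stages. The stage and micro-batch counts are hypothetical, and this toy does not reproduce the scheduling model analyzed in the paper.

# Minimal sketch of pipelined model parallelism (GPipe-style forward
# schedule). Illustrative only: not the paper's scheduling model.

S = 4   # number of pipeline stages (groups of consecutive layers); assumed
M = 6   # number of micro-batches the batch is split into; assumed

# fwd[s][m] = time step at which stage s runs the forward pass of
# micro-batch m: stage s can start micro-batch m once stage s-1 has
# finished it and stage s itself has finished micro-batch m-1.
fwd = [[0] * M for _ in range(S)]
for s in range(S):
    for m in range(M):
        prev_stage = fwd[s - 1][m] + 1 if s > 0 else 0
        prev_mb = fwd[s][m - 1] + 1 if m > 0 else 0
        fwd[s][m] = max(prev_stage, prev_mb)

# Print the schedule (backward passes omitted for brevity). Under an
# all-forwards-then-all-backwards schedule, every forward whose backward
# has not yet run must keep its activations stashed, so stage 0 holds up
# to M activation sets at once -- the extra memory cost pipelining induces.
for s in range(S):
    row = ["."] * (fwd[S - 1][M - 1] + 1)
    for m in range(M):
        row[fwd[s][m]] = str(m)
    print(f"stage {s}: " + " ".join(row))

Running the sketch prints one row per stage, with micro-batch indices staggered diagonally across time steps: the classic pipeline "fill" pattern in which later stages sit idle at first while early stages accumulate stashed activations.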

Dates and versions

hal-02968802, version 1 (16-10-2020)
hal-02968802, version 2 (16-10-2020)
hal-02968802, version 3 (18-02-2021)

Identifiers

  • HAL Id: hal-02968802, version 3

Cite

Olivier Beaumont, Lionel Eyraud-Dubois, Alena Shilova. Pipelined Model Parallelism: Complexity Results and Memory Considerations. Euro-Par 2021, Aug 2021, Lisbon, Portugal. ⟨hal-02968802v3⟩
