Preprint / Working Paper. Year: 2022

Weight Offloading Strategies for Training Large DNN Models

Abstract

The limited memory of GPUs poses serious problems for the training phase of deep neural networks (DNNs). Indeed, with the recent tremendous increase in the size of DNN models, which can now routinely include hundreds of billions or even trillions of parameters, it is impossible to store these models in the memory of a GPU, and several strategies have been devised to solve this problem. In this paper, we analyze in detail the strategy that consists in offloading the weights of some model layers from the GPU to the CPU when they are not in use. Since the PCI bus bandwidth between the GPU and the CPU is limited, it is crucial to decide which layers should be transferred (offloaded and prefetched) and when. We prove that this problem is in general NP-complete in the strong sense, and we propose a lower-bound formulation in the form of an Integer Linear Program (ILP). We also propose heuristics to select the layers to offload and to build the schedule of data transfers. We show that this approach makes it possible to build near-optimal weight offloading strategies on realistically sized DNNs and architectures.
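The paper itself contains no code, but the mechanism it studies (moving layer weights to the CPU when they are not needed and prefetching them back over the PCI bus before they are used again) can be sketched in a few lines of PyTorch. The sketch below is purely illustrative: the class name, the one-layer-ahead prefetch policy, and the use of a single side stream for transfers are assumptions made for this example, not the heuristics or schedules proposed by the authors, and it only covers the forward pass.

```python
# Illustrative sketch of layer-wise weight offloading (not the authors' code).
# Assumes a CUDA device; the simple "prefetch one layer ahead, offload after use"
# policy stands in for the transfer schedules studied in the paper.
import torch
import torch.nn as nn

class OffloadedSequential(nn.Module):
    """Run a sequence of layers while keeping only a few layers' weights on the GPU."""

    def __init__(self, layers, device="cuda"):
        super().__init__()
        self.layers = nn.ModuleList(layers)          # weights start on the CPU
        self.device = torch.device(device)
        self.copy_stream = torch.cuda.Stream()       # side stream for CPU<->GPU copies

    def _prefetch(self, layer):
        # Issue host-to-device copies on the side stream so that they can overlap
        # with the computation of the current layer (true overlap would additionally
        # require the CPU copies of the weights to live in pinned memory).
        with torch.cuda.stream(self.copy_stream):
            layer.to(self.device, non_blocking=True)

    def forward(self, x):
        x = x.to(self.device)
        self._prefetch(self.layers[0])
        for i, layer in enumerate(self.layers):
            # Make sure the weights of this layer have arrived before using them.
            torch.cuda.current_stream().wait_stream(self.copy_stream)
            if i + 1 < len(self.layers):
                self._prefetch(self.layers[i + 1])   # overlap next transfer with compute
            x = layer(x)
            layer.to("cpu")                          # offload the weights once used
        return x

if torch.cuda.is_available():
    model = OffloadedSequential([nn.Linear(1024, 1024) for _ in range(8)])
    out = model(torch.randn(4, 1024))
    print(out.shape)  # torch.Size([4, 1024])
```

In an actual training run the backward pass needs the same weights again, and all transfers compete for the same limited PCI bandwidth; choosing which layers to offload and when to schedule each transfer is precisely the optimization problem shown to be NP-complete and addressed with an ILP lower bound and heuristics in the paper.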
Main file
rr.pdf (467.3 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03580767, version 1 (18-02-2022)

Identifiers

  • HAL Id: hal-03580767, version 1

Cite

Olivier Beaumont, Lionel Eyraud-Dubois, Alena Shilova, Xunyi Zhao. Weight Offloading Strategies for Training Large DNN Models. 2022. ⟨hal-03580767⟩
107 views
351 downloads
