Preprints, Working Papers, ... Year: 2022

Weight Offloading Strategies for Training Large DNN Models

Abstract

The limited memory of GPUs poses serious problems during the training phase of deep neural networks (DNNs). With the recent tremendous increase in the size of DNN models, which can now routinely include hundreds of billions or even trillions of parameters, it is impossible to store these models in the memory of a single GPU, and several strategies have been devised to address this problem. In this paper, we analyze in detail the strategy of offloading the weights of some model layers from the GPU to the CPU when they are not in use. Since the bandwidth of the PCI bus between the GPU and the CPU is limited, it is crucial to know which layers should be transferred (offloaded and prefetched) and when. We prove that this problem is in general NP-complete in the strong sense, and we propose a lower-bound formulation in the form of an Integer Linear Program (ILP). We then propose heuristics to select the layers to offload and to build the schedule of data transfers. We show that this approach makes it possible to build near-optimal weight offloading strategies on DNNs and architectures of realistic size.
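The paper itself contains no code; as a rough illustration of the mechanism whose schedule it studies, the following PyTorch-style sketch shows how a layer's weights can be offloaded to pinned CPU memory and prefetched back to the GPU on a side CUDA stream so that transfers overlap with compute. The helper names (`offload_layer`, `prefetch_layer`), the layer sizes, and the naive "keep only the current and next layer on the GPU" schedule are illustrative assumptions, not the paper's heuristics, which decide which layers to move and when.

```python
# Minimal sketch (not from the paper): forward pass only, hypothetical helpers.
import torch
import torch.nn as nn

transfer_stream = torch.cuda.Stream()  # side stream for PCIe transfers

def offload_layer(layer: nn.Module):
    # Copy parameters into pinned CPU buffers so later prefetches can be asynchronous.
    for p in layer.parameters():
        cpu_buf = torch.empty(p.shape, dtype=p.dtype, device="cpu", pin_memory=True)
        cpu_buf.copy_(p.data)          # GPU -> CPU copy (synchronous on the host)
        p.data = cpu_buf

def prefetch_layer(layer: nn.Module):
    # Asynchronous CPU -> GPU copy issued on the transfer stream.
    with torch.cuda.stream(transfer_stream):
        for p in layer.parameters():
            p.data = p.data.to("cuda", non_blocking=True)

layers = [nn.Linear(4096, 4096) for _ in range(8)]
for layer in layers:
    offload_layer(layer)               # start with all weights in pinned CPU memory

x = torch.randn(32, 4096, device="cuda")
prefetch_layer(layers[0])
for i, layer in enumerate(layers):
    # Make the compute stream wait until this layer's weights have arrived.
    torch.cuda.current_stream().wait_stream(transfer_stream)
    if i + 1 < len(layers):
        prefetch_layer(layers[i + 1])  # fetch the next layer while this one computes
    x = layer(x)
    offload_layer(layer)               # evict weights once they are no longer needed
```

In this toy schedule every layer is offloaded and prefetched exactly once per forward pass; the paper's contribution is precisely to decide, under the limited PCI bandwidth, which layers are worth transferring and how to order the transfers.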
Main file: rr.pdf (467.3 KB). Origin: Files produced by the author(s).

Dates and versions

hal-03580767, version 1 (18-02-2022)

Identifiers

  • HAL Id: hal-03580767, version 1

Cite

Olivier Beaumont, Lionel Eyraud-Dubois, Alena Shilova, Xunyi Zhao. Weight Offloading Strategies for Training Large DNN Models. 2022. ⟨hal-03580767⟩