Efficient Data-Parallel Continual Learning with Asynchronous Distributed Rehearsal Buffers
Conference paper, 2024


Abstract

Deep learning has emerged as a powerful method for extracting valuable information from large volumes of data. However, when new training data arrives continuously (i.e., is not fully available from the beginning), incremental training suffers from catastrophic forgetting (i.e., new patterns are reinforced at the expense of previously acquired knowledge). Training from scratch each time new training data becomes available would result in extremely long training times and massive data accumulation. Rehearsal-based continual learning has shown promise for addressing the catastrophic forgetting challenge, but research to date has not addressed performance and scalability. To fill this gap, we propose an approach based on a distributed rehearsal buffer that efficiently complements data-parallel training on multiple GPUs, allowing us to achieve short runtime and scalability while retaining high accuracy. It leverages a set of buffers (local to each GPU) and uses several asynchronous techniques for updating these local buffers in an embarrassingly parallel fashion, all while handling the communication overheads necessary to augment input mini-batches (groups of training samples fed to the model) using unbiased, global sampling. In this paper we explore the benefits of this approach for classification models. We run extensive experiments on up to 128 GPUs of the ThetaGPU supercomputer to compare our approach with baselines representative of training-from-scratch (the upper bound in terms of accuracy) and incremental training (the lower bound). Results show that rehearsal-based continual learning achieves a top-5 classification accuracy close to the upper bound, while simultaneously exhibiting a runtime close to the lower bound.
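The following is a minimal, single-process Python/PyTorch sketch of the rehearsal idea summarized above: each GPU keeps a local buffer of previously seen samples, mixes a few of them into every incoming mini-batch, and updates the buffer as new data arrives. It is only an illustration of the general technique, not the authors' implementation; the asynchronous buffer updates, inter-GPU communication, and unbiased global sampling described in the paper are omitted, and names such as RehearsalBuffer, capacity, and rehearsal_count are hypothetical.

    import random
    import torch

    class RehearsalBuffer:
        """Fixed-capacity buffer of past (sample, label) pairs, kept locally on one GPU."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = []   # stored (x, y) pairs
            self.seen = 0    # number of samples observed so far

        def update(self, x_batch, y_batch):
            # Reservoir sampling keeps an unbiased subset of the stream seen so far.
            for x, y in zip(x_batch, y_batch):
                self.seen += 1
                if len(self.data) < self.capacity:
                    self.data.append((x.clone(), y.clone()))
                else:
                    j = random.randrange(self.seen)
                    if j < self.capacity:
                        self.data[j] = (x.clone(), y.clone())

        def sample(self, k):
            # Draw up to k stored pairs to mix into the current mini-batch.
            picks = random.sample(self.data, min(k, len(self.data)))
            xs, ys = zip(*picks)
            return torch.stack(xs), torch.stack(ys)

    def train_step(model, optimizer, loss_fn, buffer, x, y, rehearsal_count=8):
        # Augment the incoming mini-batch with rehearsal samples, then train as usual.
        bx, by = x, y
        if len(buffer.data) > 0:
            rx, ry = buffer.sample(rehearsal_count)
            bx = torch.cat([x, rx])
            by = torch.cat([y, ry])
        optimizer.zero_grad()
        loss = loss_fn(model(bx), by)
        loss.backward()
        optimizer.step()
        buffer.update(x, y)   # only the newly arrived samples feed the buffer
        return loss.item()

In the distributed setting described in the paper, each data-parallel worker would hold such a buffer locally and perform its updates asynchronously, overlapping the communication needed for global sampling with the training computation itself.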

Dates and versions

hal-04600107, version 1 (04-06-2024)

Cite

Thomas Bouvier, Bogdan Nicolae, Hugo Chaugier, Alexandru Costan, Ian Foster, et al. Efficient Data-Parallel Continual Learning with Asynchronous Distributed Rehearsal Buffers. CCGrid 2024 - IEEE 24th International Symposium on Cluster, Cloud and Internet Computing, May 2024, Philadelphia (PA), United States. pp.1-10, ⟨10.1109/CCGrid59990.2024.00036⟩. ⟨hal-04600107⟩