Conference Paper, Year: 2021

Accelerating DNN Architecture Search at Scale Using Selective Weight Transfer

Abstract

Deep learning applications are rapidly gaining traction both in industry and scientific computing. Unsurprisingly, there has been significant interest in adopting deep learning at very large scale on supercomputing infrastructures for a variety of scientific applications. A key issue in this context is how to find a model architecture that is suitable for solving the problem. We call this the neural architecture search (NAS) problem. Over time, many automated approaches have been proposed that can explore a large number of candidate models. However, this remains a time-consuming and resource-intensive process: the candidates are often trained from scratch for a small number of epochs in order to obtain a set of top-K best performers, which are then fully trained in a second phase. To address this problem, we propose a novel method that leverages checkpoints of previously discovered candidates to accelerate NAS. Based on the observation that the candidates feature high structural similarity, we propose the idea that new candidates need not be trained starting from random weights, but rather from the weights of similar layers of previously evaluated candidates. Thanks to this approach, the convergence of the candidate models is significantly accelerated, and the search produces candidates that are statistically better with respect to the objective metrics. Furthermore, once the top-K models are identified, our approach provides a significant speed-up (1.4-1.5x on average) for the full training.
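To illustrate the core idea, the sketch below shows one way such a weight transfer could look in PyTorch. This is a minimal sketch under stated assumptions: the matching rule (identical parameter name and tensor shape) is a simplifying stand-in for the structural-similarity matching the abstract describes, the checkpoint is assumed to hold a plain state dict, and `transfer_matching_weights` is a hypothetical helper, not an API from the paper.

```python
import torch
import torch.nn as nn


def transfer_matching_weights(candidate: nn.Module, checkpoint_path: str) -> int:
    """Initialize `candidate` from a previously evaluated candidate's checkpoint.

    Hypothetical helper: tensors are copied when both the parameter name and
    the shape match (a stand-in for the paper's structural-similarity
    matching); all other layers keep their fresh random initialization.
    Returns the number of tensors transferred.
    """
    # Assumes the checkpoint was saved with torch.save(model.state_dict(), path).
    donor_state = torch.load(checkpoint_path, map_location="cpu")
    own_state = candidate.state_dict()
    transferred = 0
    for name, tensor in donor_state.items():
        if name in own_state and own_state[name].shape == tensor.shape:
            own_state[name].copy_(tensor)  # in-place copy of the donor weights
            transferred += 1
    candidate.load_state_dict(own_state)
    return transferred
```

In a NAS loop, each freshly sampled candidate would be run through such a routine with the checkpoint of a structurally similar, previously evaluated candidate before its short evaluation training begins; layers with no match simply start from random weights as usual.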
Main file: Accelerating_DNN_Architecture_Search_at_Scale_Using_Selective_Weight_Transfer.pdf (545.13 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03341805, version 1 (12-09-2021)

Identifiers

  • HAL Id: hal-03341805, version 1

Cite

Hongyuan Liu, Bogdan Nicolae, Sheng Di, Franck Cappello, Adwait Jog. Accelerating DNN Architecture Search at Scale Using Selective Weight Transfer. CLUSTER'21: The 2021 IEEE International Conference on Cluster Computing, Sep 2021, Portland, United States. ⟨hal-03341805⟩
56 Views
141 Downloads
