Conference Paper, Year: 2021

Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees

Abstract

Asynchronous distributed algorithms are a popular way to reduce synchronization costs in large-scale optimization, in particular for neural network training. However, for nonsmooth and nonconvex objectives, few convergence guarantees exist beyond cases where closed-form proximal operator solutions are available. Since training most popular deep neural networks amounts to optimizing a nonsmooth and nonconvex objective, there is a pressing need for such guarantees. In this paper, we analyze for the first time the convergence of stochastic asynchronous optimization for this general class of objectives. In particular, we focus on stochastic subgradient methods that allow block variable partitioning, where the shared model is asynchronously updated by concurrent processes. To this end, we use a probabilistic model that captures key features of real asynchronous scheduling between concurrent processes. Under this model, we establish convergence with probability one to an invariant set for stochastic subgradient methods with momentum. From a practical perspective, one issue with this family of algorithms is that they are not efficiently supported by machine learning frameworks, which mostly focus on distributed data-parallel strategies. To address this, we propose a new implementation strategy for shared-memory training of deep neural networks with a partitioned but shared model, in single- and multi-GPU settings. With this implementation, we achieve an average speedup of about 1.2x over state-of-the-art training methods on popular image classification tasks, without compromising accuracy.
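The algorithmic setting described in the abstract, where concurrent workers apply stochastic subgradient steps with momentum to disjoint blocks of a shared model without global synchronization, can be illustrated with a minimal sketch. The example below is hypothetical and is not the authors' implementation: Python threads share a NumPy parameter vector, each thread updates only its own block, and the nonsmooth toy objective f(x) = ||x||_1, the learning rate, and the momentum coefficient are all illustrative assumptions.

```python
import threading
import numpy as np

# Toy sketch of asynchronous block-partitioned stochastic subgradient descent
# with momentum (illustrative only; hyperparameters and objective are assumed).
dim, n_blocks, steps = 8, 4, 1000
lr, beta = 0.05, 0.9

x = np.random.default_rng(0).standard_normal(dim)  # shared model, updated in place
velocity = np.zeros(dim)                           # momentum buffer, one entry per coordinate
blocks = np.array_split(np.arange(dim), n_blocks)  # block variable partitioning

def worker(idx, seed):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        # Stochastic subgradient of the nonsmooth toy objective f(x) = ||x||_1;
        # the added noise stands in for minibatch sampling.
        g = np.sign(x[idx]) + 0.1 * rng.standard_normal(idx.size)
        # Momentum update applied to this worker's block only; other blocks
        # may change concurrently, so reads of the shared model can be stale.
        velocity[idx] = beta * velocity[idx] + g
        x[idx] -= lr * velocity[idx]

threads = [threading.Thread(target=worker, args=(b, i)) for i, b in enumerate(blocks)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final iterate:", np.round(x, 3))
```

Note that CPython threads serialize most work under the global interpreter lock, so this sketch illustrates only the asynchronous access pattern, not a real speedup; the paper's implementation targets shared-memory training on single- and multi-GPU hardware.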
Main file: AAAI_2021_Kungurtsev_Egan_Chatterjee_Alistarh.pdf (187.26 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03266812, version 1 (22-06-2021)

Cite

Vyacheslav Kungurtsev, Malcolm Egan, Bapi Chatterjee, Dan Alistarh. Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees. AAAI 2021 - 35th Conference on Artificial Intelligence, Feb 2021, Virtual, United States. pp.1-8. ⟨hal-03266812⟩