Conference Paper, Year: 2020

Decentralized gradient methods: does topology matter?

Abstract

Consensus-based distributed optimization methods have recently been advocated as alternatives to the parameter-server and ring all-reduce paradigms for large-scale training of machine learning models. In these methods, each worker maintains a local estimate of the optimal parameter vector and iteratively updates it by averaging the estimates obtained from its neighbors and then applying a correction based on its local dataset. While theoretical results suggest that the worker communication topology should have a strong impact on the number of epochs needed to converge, previous experiments have suggested the opposite. This paper sheds light on this apparent contradiction and shows how sparse topologies can lead to faster convergence even in the absence of communication delays.
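The update rule described in the abstract (consensus averaging over neighbors followed by a local gradient correction) can be sketched in a few lines. Below is a minimal illustration, not the paper's own code: it assumes a synchronous setting, a doubly stochastic mixing matrix W encoding the communication topology, and toy quadratic local objectives; names such as decentralized_gradient_step and targets are illustrative.

import numpy as np


def decentralized_gradient_step(x, W, grads, lr):
    """One synchronous round: consensus averaging followed by a local gradient step.

    x     : (n_workers, dim) array, row i is worker i's current estimate.
    W     : (n_workers, n_workers) mixing matrix; W[i, j] > 0 only if j is a
            neighbor of i (or j == i), with rows summing to 1.
    grads : (n_workers, dim) array, row i is worker i's local gradient.
    lr    : step size.
    """
    # Consensus step: each worker averages the estimates of its neighbors.
    x_mixed = W @ x
    # Correction step: gradient computed on each worker's own dataset.
    return x_mixed - lr * grads


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_workers, dim = 4, 3

    # Sparse ring topology: each worker mixes only with itself and two neighbors.
    W = np.zeros((n_workers, n_workers))
    for i in range(n_workers):
        for j in (i - 1, i, i + 1):
            W[i, j % n_workers] = 1.0 / 3.0

    # Toy local objectives f_i(x) = ||x - target_i||^2 / 2, so the global
    # minimizer of the average objective is the mean of the targets.
    targets = rng.normal(size=(n_workers, dim))
    x = np.zeros((n_workers, dim))
    for _ in range(1000):
        grads = x - targets  # local gradients
        x = decentralized_gradient_step(x, W, grads, lr=0.01)

    # With a constant step size the workers converge to a neighborhood of the
    # global minimizer; the neighborhood shrinks as the step size decreases.
    print("max deviation from global minimizer:",
          np.abs(x - targets.mean(axis=0)).max())

The ring topology above is one example of a sparse communication graph; replacing W with the complete-graph matrix full of 1/n_workers entries recovers centralized-style averaging and lets one compare convergence across topologies.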
Main file: AISTATS2020.pdf (5.84 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02430485, version 1 (07-01-2020)

Identifiers

  • HAL Id: hal-02430485, version 1

Cite

Giovanni Neglia, Chuan Xu, Don Towsley, Gianmarco Calbi. Decentralized gradient methods: does topology matter?. AISTATS 2020 - 23rd International Conference on Artificial Intelligence and Statistics, Aug 2020, Palermo / Online, Italy. ⟨hal-02430485⟩