DStore: A Lightweight Scalable Learning Model Repository with Fine-Grained Tensor-Level Access
Conference Paper - Year: 2023

DStore: A Lightweight Scalable Learning Model Repository with Fine-Grained Tensor-Level Access

Meghana Madhyastha
Robert Underwood
Randal Burns
Bogdan Nicolae

Abstract

The ability to share and reuse deep learning (DL) models is a key driver that facilitates the rapid adoption of artificial intelligence (AI) in both industrial and scientific applications. However, state-of-the-art approaches to store and access DL models efficiently at scale lag behind. Most often, DL models are serialized using various formats (e.g., HDF5, SavedModel) and stored as files on POSIX file systems. While simple and portable, such an approach exhibits high serialization and I/O overheads, especially under concurrency. Additionally, the emergence of advanced AI techniques (transfer learning, sensitivity analysis, explainability, etc.) introduces the need for fine-grained access to tensors to facilitate the extraction and reuse of individual tensors or subsets of tensors. Such patterns are underserved by state-of-the-art approaches: requiring tensors to be read in bulk incurs suboptimal performance, scales poorly, and/or overutilizes network bandwidth. In this paper, we propose a lightweight, distributed, RDMA-enabled learning model repository that addresses these challenges. Specifically, we introduce several ideas: a compact architecture graph representation with stable hashing and client-side metadata caching, scalable load balancing across multiple providers, RDMA-optimized data staging, and direct access to raw tensor data. We evaluate our proposal in extensive experiments that involve different access patterns using learning models of diverse shapes and sizes. Our evaluations show a significant improvement (between 2× and 30×) over a variety of state-of-the-art model storage approaches while scaling to half the Cooley cluster at the Argonne Leadership Computing Facility.
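The abstract describes fine-grained, per-tensor access keyed by a stable hash of the model's architecture graph. The Python sketch below is a minimal, hypothetical illustration of that access pattern only; it is not the paper's implementation or API. The names stable_tensor_key and MockTensorRepository, and the dict-backed store standing in for the distributed RDMA-enabled repository, are assumptions made for illustration.

import hashlib
import json

def stable_tensor_key(model_name: str, layer_descriptor: dict) -> str:
    # Hypothetical scheme: derive a deterministic key from the model name and
    # a canonical JSON encoding of the layer's architecture metadata, so
    # clients can locate a tensor without deserializing the whole model file.
    canonical = json.dumps(layer_descriptor, sort_keys=True)
    return hashlib.sha256(f"{model_name}:{canonical}".encode()).hexdigest()

class MockTensorRepository:
    # Stand-in for a distributed tensor repository: a plain dict here, so the
    # put/get-by-key access pattern is runnable without any cluster.
    def __init__(self):
        self._store = {}

    def put_tensor(self, key: str, raw_bytes: bytes) -> None:
        self._store[key] = raw_bytes

    def get_tensor(self, key: str) -> bytes:
        # Only the requested tensor is transferred, rather than the entire
        # serialized model, which is the fine-grained pattern the paper targets.
        return self._store[key]

# Usage: fetch a single layer's weights without reading the whole model.
repo = MockTensorRepository()
layer = {"name": "conv1", "type": "Conv2D", "shape": [64, 3, 7, 7]}
key = stable_tensor_key("resnet50", layer)
repo.put_tensor(key, b"\x00" * (64 * 3 * 7 * 7 * 4))  # placeholder raw weights
weights = repo.get_tensor(key)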
Main file
DStore__A_Lightweight_Scalable_Learning_Model_Repository_with_Fine_Grained_Tensor_Level_Access.pdf (1.06 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04119926, version 1 (07-06-2023)

Licence

Attribution

Identifiers

  • HAL Id: hal-04119926
  • DOI: 10.1145/3577193.3593730

Cite

Meghana Madhyastha, Robert Underwood, Randal Burns, Bogdan Nicolae. DStore: A Lightweight Scalable Learning Model Repository with Fine-Grained Tensor-Level Access. ICS'23: The 2023 International Conference on Supercomputing, ACM; IEEE, Jun 2023, Orlando, United States. ⟨10.1145/3577193.3593730⟩. ⟨hal-04119926⟩