Stash: A Comprehensive Stall-Centric Characterization of Public Cloud VMs for Distributed Deep Learning

Aakash Sharma, Vivek M. Bhasi, Sonali Singh, Rishabh Jain, Jashwant Raj Gunasekaran, Subrata Mitra, Mahmut Taylan Kandemir, George Kesidis, Chita R. Das

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep neural networks (DNNs) are increasingly popular owing to their ability to solve complex problems such as image recognition, autonomous driving, and natural language processing. Their growing complexity, coupled with the use of larger volumes of training data (to achieve acceptable accuracy), has warranted the use of GPUs and other accelerators. Such accelerators are typically expensive, with users having to pay a high upfront cost to acquire them. For infrequent use, users can instead leverage the public cloud to mitigate the high acquisition cost. However, with the wide diversity of hardware instances (particularly GPU instances) available in the public cloud, it becomes challenging for a user to make an appropriate choice from a cost/performance standpoint. In this work, we address this problem by (i) introducing Stash, a comprehensive distributed deep learning (DDL) profiler that determines the various execution stalls that DDL suffers from, and (ii) using Stash to extensively characterize various public cloud GPU instances by running popular DNN models on them. Specifically, Stash estimates two types of communication stalls, namely interconnect and network stalls, that play a dominant role in DDL execution time. Stash is implemented on top of prior work, DS-Analyzer, which computes only CPU and disk stalls. Using our detailed stall characterization, we list the advantages and shortcomings of public cloud GPU instances to help users make informed decisions. Our characterization results indicate that the more expensive GPU instances may not be the most performant for all DNN models and that AWS can sometimes sub-optimally allocate hardware interconnect resources. Specifically, the intra-machine interconnect can introduce communication overheads of up to 90% of DNN training time, and network-connected instances can suffer from up to 5× slowdown compared to training on a single instance. Furthermore, (iii) we model the impact of DNN macroscopic features, such as the number of layers and the number of gradients, on communication stalls, and finally, (iv) we briefly discuss a cost comparison with existing work.
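The stall-centric view the abstract describes can be illustrated with a small back-of-the-envelope sketch. The snippet below is not Stash itself; it only shows, under assumed per-iteration timings (the numbers and the `IterationTiming`/`stall_fractions` helpers are hypothetical), how CPU, disk, interconnect, and network stalls might each be expressed as a fraction of one training iteration's wall-clock time.

```python
from dataclasses import dataclass


@dataclass
class IterationTiming:
    """Hypothetical per-iteration timings (seconds) collected by a profiler."""
    total: float         # wall-clock time of one training iteration
    gpu_compute: float   # forward + backward compute on the GPU
    cpu_prep: float      # data loading / preprocessing on the CPU (overlapped with compute)
    disk_io: float       # time spent waiting on storage reads
    interconnect: float  # intra-machine GPU-to-GPU gradient exchange (e.g., PCIe/NVLink)
    network: float       # inter-machine gradient exchange over the network


def stall_fractions(t: IterationTiming) -> dict:
    """Express each stall source as a fraction of total iteration time.

    CPU stall is approximated as the portion of preprocessing that could not
    be hidden behind GPU compute; the other stalls are taken as measured.
    """
    return {
        "cpu_stall": max(t.cpu_prep - t.gpu_compute, 0.0) / t.total,
        "disk_stall": t.disk_io / t.total,
        "interconnect_stall": t.interconnect / t.total,
        "network_stall": t.network / t.total,
    }


# Example with made-up numbers: a network-bound multi-instance configuration.
timing = IterationTiming(total=1.00, gpu_compute=0.30, cpu_prep=0.25,
                         disk_io=0.05, interconnect=0.15, network=0.50)
for name, frac in stall_fractions(timing).items():
    print(f"{name}: {frac:.0%}")
```

In a real run the per-stall times would come from instrumenting the training loop, e.g., timing gradient synchronization separately from forward/backward compute, rather than from fixed constants as in this sketch.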

Original language: English (US)
Title of host publication: Proceedings - 2023 IEEE 43rd International Conference on Distributed Computing Systems, ICDCS 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 875-886
Number of pages: 12
ISBN (Electronic): 9798350339864
DOIs
State: Published - 2023
Event: 43rd IEEE International Conference on Distributed Computing Systems, ICDCS 2023 - Hong Kong, China
Duration: Jul 18 2023 - Jul 21 2023

Publication series

Name: Proceedings - International Conference on Distributed Computing Systems
Volume: 2023-July

Conference

Conference: 43rd IEEE International Conference on Distributed Computing Systems, ICDCS 2023
Country/Territory: China
City: Hong Kong
Period: 7/18/23 - 7/21/23

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications
