Finding the best VM configuration is key to achieving lower cost and higher throughput, two primary concerns in cloud-based distributed neural network (NN) training today. Optimal VM selection that meets user constraints requires efficiently navigating a large search space while controlling for the performance variance associated with sharing cloud instances and networks. In this work, we characterize this variance in the context of distributed NN training and present the results of a comprehensive throughput and cost-efficiency study, conducted across a wide array of instances, to prune the VM search space. Using insights from these studies, we built Srifty, a system that combines runtime profiling with learned performance models to accurately predict training performance and find the best VM choice that satisfies user constraints, potentially leveraging both heterogeneous setups and spot instances. We integrated Srifty with PyTorch and evaluated it on Amazon EC2. We conducted a large-scale generalization study of Srifty across more than 2K training setups on EC2. Our results show that Srifty achieves an iteration latency prediction error of 8%, and its VM instance recommendations offer significant throughput gain and cost reduction while satisfying user constraints compared to existing solutions in complex, real-world scenarios.
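The abstract describes Srifty's approach (profile a workload once, predict per-configuration iteration latency with a performance model, then search for the best VM choice under user constraints) but not its interface. Below is a minimal Python sketch of that idea under stated assumptions: VMConfig, predict_iter_latency, and best_config are hypothetical stand-ins rather than Srifty's actual API, the linear communication-cost term is a naive substitute for Srifty's learned model, and the instance prices are illustrative.

from dataclasses import dataclass

@dataclass
class VMConfig:
    instance_type: str   # e.g., an EC2 type such as "p3.2xlarge"
    count: int           # number of instances in the cluster
    hourly_cost: float   # total $/hour for the whole configuration

def predict_iter_latency(config: VMConfig, profile: dict) -> float:
    """Hypothetical stand-in for a learned performance model: combine
    profiled per-iteration compute time with a naive communication
    estimate that grows linearly with cluster size."""
    compute = profile["compute_s"]  # measured once via runtime profiling
    comm = profile["grad_bytes"] / profile["bw_bytes_per_s"] * config.count
    return compute + comm

def best_config(candidates, profile, max_cost_per_hour):
    """Pick the highest-predicted-throughput configuration whose cost
    satisfies the user's budget constraint (a simplification of the
    constrained search the abstract describes)."""
    best, best_tput = None, 0.0
    for c in candidates:
        if c.hourly_cost > max_cost_per_hour:
            continue  # violates the user's cost constraint
        tput = 1.0 / predict_iter_latency(c, profile)  # iterations/second
        if tput > best_tput:
            best, best_tput = c, tput
    return best

# Illustrative usage (all numbers assumed, not measured):
profile = {"compute_s": 0.12, "grad_bytes": 100e6, "bw_bytes_per_s": 1.25e9}
candidates = [VMConfig("p3.2xlarge", 4, 12.24),
              VMConfig("g4dn.12xlarge", 2, 7.82)]
choice = best_config(candidates, profile, max_cost_per_hour=10.0)

In the real system, the latency predictor would be trained on profiled data and the search would also consider heterogeneous and spot configurations, per the abstract.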
Author Information
Liang Luo (University of Washington)
Peter West (University of Washington)
Pratyush Patel (University of Washington)
Arvind Krishnamurthy (University of Washington)
Luis Ceze (University of Washington and OctoML)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: SRIFTY: Swift and Thrifty Distributed Neural Network Training on the Cloud
More from the Same Authors
- 2022 Poster: DietCode: Automatic Optimization for Dynamic Tensor Programs
  Bojian Zheng · Ziheng Jiang · Cody Hao Yu · Haichen Shen · Joshua Fromm · Yizhi Liu · Yida Wang · Luis Ceze · Tianqi Chen · Gennady Pekhimenko
- 2022 Oral: DietCode: Automatic Optimization for Dynamic Tensor Programs
  Bojian Zheng · Ziheng Jiang · Cody Hao Yu · Haichen Shen · Joshua Fromm · Yizhi Liu · Yida Wang · Luis Ceze · Tianqi Chen · Gennady Pekhimenko
- 2021: Thoughts on Research, Community and Impact
  Luis Ceze
- 2021: Panel Discussion
  Luis Ceze · Cliff Young · Chris Lattner
- 2020 Oral: Riptide: Fast End-to-End Binarized Neural Networks
  Joshua Fromm · Meghan Cowan · Matthai Philipose · Luis Ceze · Shwetak Patel
- 2020 Poster: PLink: Discovering and Exploiting Locality for Accelerated Distributed Training on the Public Cloud
  Liang Luo · Peter West · Jacob Nelson · Arvind Krishnamurthy · Luis Ceze
- 2020 Poster: Riptide: Fast End-to-End Binarized Neural Networks
  Joshua Fromm · Meghan Cowan · Matthai Philipose · Luis Ceze · Shwetak Patel
- 2020 Oral: PLink: Discovering and Exploiting Locality for Accelerated Distributed Training on the Public Cloud
  Liang Luo · Peter West · Jacob Nelson · Arvind Krishnamurthy · Luis Ceze