


Session 12: Training (II)

Moderator: Carole-Jean Wu


Thu 8 April 15:20 - 15:40 PDT

Towards Scalable Distributed Training of Deep Learning on Public Cloud Clusters

Shaohuai Shi · Xianhao Zhou · Shutao Song · Xingyao Wang · Zilin Zhu · Xue Huang · Xinan Jiang · Feihu Zhou · Zhenyu Guo · Liqiang Xie · Rui Lan · Xianbin Ouyang · Yan Zhang · Jieqian Wei · Jing Gong · Weiliang Lin · Ping Gao · Peng Meng · Xiaomin Xu · Chenyang Guo · Bo Yang · Zhibo Chen · Yongjian Wu · Xiaowen Chu

Distributed training techniques have been widely deployed for training large-scale deep models on dense-GPU clusters. However, on public cloud clusters, the moderate inter-connection bandwidth between instances prevents traditional state-of-the-art distributed training systems from scaling well when training large models. In this paper, we propose a new computation- and communication-efficient top-k sparsification communication library for distributed training. To further improve system scalability, we optimize I/O with a simple yet efficient multi-level data caching mechanism and optimize the update operation by introducing a novel parallel tensor operator. Experimental results on a 16-node Tencent Cloud cluster (each node with 8 Nvidia Tesla V100 GPUs) show that our system is 25%-40% faster than existing state-of-the-art systems on CNNs and Transformer. Finally, we break the DAWNBench record for training ResNet-50 to 93% top-5 accuracy on ImageNet.
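The core idea of top-k gradient sparsification is to communicate only the k largest-magnitude gradient entries per step and accumulate the rest locally as an error-feedback residual. A minimal sketch (illustrative only; the function name and interface are assumptions, not the authors' library API):

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude gradient entries for communication.

    Returns sparse (indices, values) to send, plus the residual that is
    accumulated locally and added to the next iteration's gradient
    (error feedback). Hypothetical sketch, not the paper's implementation.
    """
    flat = grad.ravel()
    # Indices of the k largest-magnitude entries (unordered within the top k).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[idx]
    # Everything not sent stays behind as the local residual.
    residual = flat.copy()
    residual[idx] = 0.0
    return idx, values, residual.reshape(grad.shape)
```

Each worker would send only `(idx, values)` over the network, cutting communication volume from the full gradient size down to k entries, which is what makes the scheme attractive on bandwidth-limited cloud instances.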

Thu 8 April 15:40 - 16:00 PDT

Understanding and Improving Failure Tolerant Training for Deep Learning Recommendation with Partial Recovery

Kiwan Maeng · Shivam Bharuka · Isabel Gao · Mark Jeffrey · Vikram Saraph · Bor-Yiing Su · Caroline Trippel · Jiyan Yang · Mike Rabbat · Brandon Lucia · Carole-Jean Wu

This paper proposes and optimizes a partial recovery training system, CPR, for recommendation models. CPR relaxes the consistency requirement by allowing non-failed nodes to proceed without loading checkpoints when a node fails during training, reducing failure-related overheads. To the best of our knowledge, this is the first work to perform a data-driven, in-depth analysis of applying partial recovery to recommendation models, identifying a trade-off between accuracy and performance. Motivated by this analysis, CPR reduces training time while maintaining the desired level of model accuracy by (1) estimating the benefit of partial recovery, (2) selecting an appropriate checkpoint saving interval, and (3) prioritizing the saving of updates to more frequently accessed parameters. Two variants of CPR, CPR-MFU and CPR-SSU, reduce the checkpoint-related overhead from 8.2-8.5% to 0.53-0.68% relative to full recovery, on a setup emulating the failure pattern and overhead of a production-scale cluster. While significantly reducing overhead, CPR achieves model quality on par with the more expensive full recovery scheme when training a state-of-the-art recommendation model on Criteo's Terabyte CTR dataset. Our results also suggest that CPR can speed up training on a real production-scale cluster without notably degrading accuracy.
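The "prioritize frequently accessed parameters" idea can be sketched as a checkpointer that tracks embedding-row access counts and, under a storage budget, saves only the hottest rows; on failure, unsaved (cold) rows fall back to a fresh initialization. This is a loose illustration in the spirit of CPR-MFU; the class and method names are assumptions, not the authors' API:

```python
from collections import Counter

class PartialCheckpointer:
    """Hypothetical sketch of frequency-prioritized partial checkpointing."""

    def __init__(self, budget):
        self.budget = budget            # max number of rows saved per checkpoint
        self.access_counts = Counter()  # how often each embedding row is touched
        self.saved = {}

    def record_access(self, row_id):
        self.access_counts[row_id] += 1

    def save_checkpoint(self, embedding_table):
        # Save only the most-frequently-used rows, up to the budget.
        hot = [r for r, _ in self.access_counts.most_common(self.budget)]
        self.saved = {r: embedding_table[r] for r in hot}

    def recover(self, embedding_table, init_value=0.0):
        # Restore saved hot rows; cold rows revert to a fresh initial value.
        return {r: self.saved.get(r, init_value) for r in embedding_table}
```

Because access frequencies in recommendation workloads are highly skewed, saving only hot rows preserves most of the learned state at a small fraction of the full checkpoint cost, which is the intuition behind the reported overhead reduction.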

Thu 8 April 16:00 - 16:20 PDT

Wavelet: Efficient DNN Training with Tick-Tock Scheduling

Guanhua Wang · Kehan Wang · Kenan Jiang · XIANGJUN LI · Ion Stoica

DNNs have revolutionized a wide range of applications, such as image classification, speech recognition, and robotics control. As DNN models become more computationally expensive to train, parallel execution with multiple accelerators (e.g., GPUs) is adopted, and system efficiency becomes a major issue when scaling out. However, even as computational power increases, GPUs remain under-utilized, mainly due to limited local memory size. To address this memory bound, we present Wavelet, an efficient and generic approach that fully utilizes the available on-device memory across all GPUs involved in a distributed training job. Wavelet achieves near-optimal on-device memory usage by adopting a simple scheduling scheme called Tick-Tock, which interleaves waves of peak memory usage among the accelerators. Evaluations on a variety of DNN models and tasks show that Wavelet trains models up to 6.7x faster than commonly used parallelism techniques.
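The interleaving idea can be illustrated with a toy scheduler: two training waves ("tick" and "tock") are staggered by half an iteration, so one wave's memory-hungry forward pass always overlaps the other wave's backward pass, which frees activations. This is a simplified sketch of the scheduling pattern, not the paper's actual scheduler:

```python
def tick_tock_schedule(num_iters):
    """Toy Tick-Tock schedule: each phase is a list of ops that run
    concurrently. At most one wave is in its forward (peak-memory)
    phase at any time. Illustrative sketch only."""
    phases = [[("tick", 0, "fwd")]]  # warm-up: tick runs its first forward alone
    for i in range(num_iters):
        # tock's forward (memory peak) runs while tick's backward frees memory.
        phases.append([("tick", i, "bwd"), ("tock", i, "fwd")])
        if i + 1 < num_iters:
            phases.append([("tock", i, "bwd"), ("tick", i + 1, "fwd")])
    phases.append([("tock", num_iters - 1, "bwd")])  # drain the last tock
    return phases
```

The invariant worth noting is that no phase ever contains two forward passes, so the combined peak memory of the two waves stays close to that of a single wave.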

Thu 8 April 16:20 - 16:40 PDT

Pipelined Backpropagation at Scale: Training Large Models without Batches

Atli Kosson · Vitaliy Chiley · Abhinav Venigalla · Joel Hestness · Urs Koster

New hardware can substantially increase the speed and efficiency of deep neural network training. To guide the development of future hardware architectures, it is pertinent to explore the hardware and machine learning properties of alternative training algorithms. In this work, we evaluate the use of small-batch, fine-grained Pipelined Backpropagation, an asynchronous pipeline-parallel training algorithm with significant hardware advantages. We introduce two methods, Spike Compensation and Linear Weight Prediction, that effectively mitigate the downsides caused by the asynchronicity of Pipelined Backpropagation and outperform existing techniques in our setting. We show that appropriate normalization and small batch sizes can also aid training. With our methods, fine-grained Pipelined Backpropagation using a batch size of one can match the accuracy of SGD for multiple networks trained on CIFAR-10 and ImageNet. Simple scaling rules allow the use of existing hyperparameters from traditional training without additional tuning.
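In asynchronous pipeline parallelism, the weights used in the forward pass are several optimizer steps stale by the time the corresponding gradient arrives. Linear weight prediction counters this by extrapolating the weights forward along the current momentum direction before use. A minimal sketch of that idea for SGD with momentum (the exact formulation in the paper may differ; this only illustrates the linear-extrapolation principle):

```python
import numpy as np

def predict_weights(w, velocity, lr, delay):
    """Linearly extrapolate stale weights `delay` optimizer steps ahead.

    Assumes SGD with momentum, where each step moves the weights by
    roughly -lr * velocity, so a linear prediction subtracts `delay`
    such steps. Illustrative sketch only.
    """
    return w - lr * delay * velocity
```

The pipeline stage would run its forward pass on `predict_weights(w, v, lr, delay)` instead of the stale `w`, where `delay` is the number of steps between that stage's forward and backward passes.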