Elasticity, the ability to scale out or in as resource demand and availability change, allows a system to improve its efficiency and performance, potentially yielding significant cost savings and shorter job completion times. However, today's distributed deep learning deployments are not elastic, in large part because the most widely used frameworks, such as TensorFlow, assume that a job's resource allocation is fixed for its entire lifetime.
In this work, we demonstrate that this assumption is not fundamental to distributed deep learning and present the first autoscaling engine for these workloads. Our system weighs both cost and scaling efficiency when making scaling decisions. As a side benefit, the same autoscaling mechanisms can be reused to remove persistent stragglers. We evaluate our system by training ResNet on CIFAR-10 and ImageNet, and we observe reductions in job completion time of up to 45% and in GPU time of up to 85.1%.
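To make the cost-versus-efficiency trade-off concrete, the sketch below shows one simple way a scaling decision like this could be structured. It is a hypothetical illustration, not the authors' actual algorithm: `scaling_efficiency`, `should_scale_out`, and the `min_efficiency` threshold are invented names, and the rule (keep adding workers only while each step achieves a large enough fraction of ideal linear speedup) is an assumed heuristic.

```python
# Hypothetical sketch of a cost-aware scaling decision (not the paper's
# actual algorithm). The idea: adding workers raises throughput, but with
# diminishing returns; past some point the extra GPU time (cost) outweighs
# the speedup, so we only scale out while each step stays efficient.

def scaling_efficiency(throughput_old: float, throughput_new: float,
                       workers_old: int, workers_new: int) -> float:
    """Fraction of ideal linear speedup achieved by growing the job.

    With perfect scaling, going from workers_old to workers_new would
    multiply throughput by workers_new / workers_old; this returns how
    close the measured throughput came to that ideal.
    """
    ideal_throughput = throughput_old * workers_new / workers_old
    return throughput_new / ideal_throughput


def should_scale_out(throughput_old: float, throughput_new: float,
                     workers_old: int, workers_new: int,
                     min_efficiency: float = 0.75) -> bool:
    """Accept a scale-out step only if it is efficient enough to justify
    the added cost; min_efficiency is an assumed, tunable threshold."""
    eff = scaling_efficiency(throughput_old, throughput_new,
                             workers_old, workers_new)
    return eff >= min_efficiency
```

For example, doubling from 4 to 8 workers while throughput rises from 100 to 190 examples/s achieves 95% of linear speedup and would be accepted; a rise to only 110 examples/s (55% efficiency) would be rejected, keeping GPU time from being wasted on poorly scaling allocations.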
Andrew Or (Princeton University)
Haoyu Zhang (Google AI)
Michael Freedman (Princeton University)
Related Events
2020 Poster: Resource Elasticity in Distributed Deep Learning
Tue Mar 3rd, 12:30 -- 03:00 AM, Room: Ballroom A