

Poster

μ-TWO: 3× Faster Multi-Model Training with Orchestration and Memory Optimization

Sanket Purandare · Abdul Wasay · Stratos Idreos · Animesh Jain

Ballroom B - Position 41

Abstract:

In this paper, we identify that modern GPUs - the key platform for developing neural networks - are severely underutilized, with ∼50% utilization, which further drops as GPUs get faster. We show that state-of-the-art training techniques that employ operator fusion and larger mini-batch sizes to improve GPU utilization are limited by memory and do not scale with the size and number of models. Additionally, we show that using state-of-the-art data swapping techniques (between GPU and host memory) to address GPU memory limitations leads to massive computation stalls as network sizes grow. We introduce μ-two, a novel compiler that maximizes GPU utilization. At the core of μ-two is an approach that leverages selective data swapping from GPU to host memory only when absolutely necessary, and maximally overlaps data movement with independent computation operations so that GPUs never have to wait for data. By collecting accurate run-time statistics and data dependencies, μ-two automatically fuses operators across different models and precisely schedules data movement and computation operations to enable concurrent training of multiple models with minimal stall time. We show how to generate μ-two schedules for diverse neural network and GPU architectures and integrate μ-two into the PyTorch framework. Our experiments show that μ-two can achieve up to a 3× speed-up across a range of network architectures and hardware, spanning vision, natural language processing, and recommendation applications.
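The core idea of overlapping swap-out traffic with independent computation can be illustrated with plain PyTorch CUDA streams. The snippet below is a minimal sketch of that pattern under assumed tensor names and sizes; it is not μ-two's actual scheduler or compiler pass, only an illustration of how a GPU-to-host copy can run concurrently with unrelated GPU work.

```python
# Minimal sketch (not the mu-TWO implementation): overlap a GPU-to-host
# swap-out with independent computation using a dedicated copy stream.
import torch

assert torch.cuda.is_available()
device = torch.device("cuda")

copy_stream = torch.cuda.Stream()  # side stream reserved for data movement

# Hypothetical tensors: an activation to offload and unrelated compute inputs.
activation = torch.randn(4096, 4096, device=device)
host_buffer = torch.empty(activation.shape, dtype=activation.dtype,
                          pin_memory=True)  # pinned memory enables async DMA
independent_input = torch.randn(4096, 4096, device=device)
weight = torch.randn(4096, 4096, device=device)

# Start the copy only after the producer of `activation` has finished.
copy_stream.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(copy_stream):
    # Asynchronous device-to-host copy; the DMA engine runs while the
    # default stream keeps computing.
    host_buffer.copy_(activation, non_blocking=True)

# Independent computation proceeds concurrently on the default stream.
hidden = independent_input @ weight

# Before the activation's memory is reused or freed, make the default
# stream wait for the copy to complete and inform the caching allocator.
torch.cuda.current_stream().wait_stream(copy_stream)
activation.record_stream(copy_stream)
```

A scheduler that, like μ-two, knows per-operator run times and data dependencies can decide which tensors to swap and where to place these copies so the transfer is hidden behind compute; the sketch simply shows the stream mechanics that make such overlap possible.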
