HetRL: Efficient Reinforcement Learning for LLMs in Heterogeneous Environments
Abstract
As large language models (LLMs) scale and new GPUs are released ever more frequently, there is an increasing demand for LLM post-training in heterogeneous environments, both to fully leverage underutilized mid-range or previous-generation GPUs across regions and to alleviate the shortage of homogeneous high-end GPUs in a single region. However, achieving high-performance reinforcement learning (RL) training for LLMs on such computing resources remains challenging because the workflow involves multiple models and tasks with complex computation and data dependencies. In this paper, we present HetRL, a distributed system for efficient RL training on infrastructures with heterogeneous GPUs and networks. HetRL formulates RL training scheduling in heterogeneous environments as a constrained joint optimization problem and introduces a novel scheduling algorithm that (1) decomposes the complex search space with a multi-level search framework; and (2) allocates the search budget via successive halving. Our extensive evaluation, consuming 20,000 GPU-hours, shows that HetRL achieves up to 9.17× (and 3.17× on average) the throughput of state-of-the-art systems across various workloads and settings.
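To illustrate the budget-allocation idea mentioned above, the following is a minimal, generic sketch of successive halving, not HetRL's actual implementation: the `candidates` and `evaluate` interface are hypothetical placeholders for candidate schedules and a scoring routine (e.g., a throughput estimate under a small trial budget).

```python
import math

def successive_halving(candidates, evaluate, total_budget):
    """Allocate a fixed search budget across candidates.

    Each round, every surviving candidate is evaluated with an equal
    slice of that round's budget; the better half survives. This
    concentrates most of the budget on the most promising candidates.
    `evaluate(candidate, budget)` returns a score (higher is better).
    """
    survivors = list(candidates)
    # Number of halving rounds needed to reduce to a single survivor.
    rounds = max(1, math.ceil(math.log2(len(survivors))))
    per_round_budget = total_budget / rounds
    while len(survivors) > 1:
        per_candidate_budget = per_round_budget / len(survivors)
        scored = [(evaluate(c, per_candidate_budget), c) for c in survivors]
        scored.sort(key=lambda sc: sc[0], reverse=True)
        # Keep the top half (at least one candidate).
        survivors = [c for _, c in scored[: max(1, len(survivors) // 2)]]
    return survivors[0]
```

Under this scheme, with n initial candidates, the total number of evaluations is O(n) rather than the O(n · rounds) a uniform allocation would require for the same per-round resolution on the final survivors.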