Learning from Less: Measuring the Effectiveness of RLVR in Low Data and Compute Regimes
Justin Bauer ⋅ Thomas Walshe ⋅ Derek Pham ⋅ Harit Vishwakarma ⋅ Armin Parchami ⋅ Frederic Sala ⋅ Paroma Varma
Abstract
Fine-tuning Large Language Models (LLMs) typically relies on large quantities of high-quality annotated data, or, in the case of Reinforcement Learning with Verifiable Rewards (RLVR), questions with well-defined ground-truth answers. While previous work has explored the benefits to model reasoning capabilities of scaling both the data and compute used for RLVR, these results have limited applicability in real-world settings where annotated data and accessible compute are scarce. In this work, we present a comprehensive empirical study of open-source Small Language Model (SLM) performance after RLVR in low data regimes. Across three novel datasets covering number counting, graph reasoning, and spatial reasoning, we characterize how model performance scales with dataset size, diversity, and complexity. We demonstrate that (1) procedural datasets allow fine-grained evaluation and training dataset development with controllable properties (size, diversity, and complexity), (2) RLVR enables models trained on lower-complexity tasks to generalize to higher-complexity tasks, and (3) training on mixed-complexity datasets offers the greatest benefits in low data regimes, providing up to 5$\times$ greater sample efficiency than training on easy tasks alone. These findings motivate future work on data scaling laws for RLVR and on the use of procedural data generators to better understand effective data development for efficient LLM fine-tuning.
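The paper's generators are not reproduced here, so the following is a minimal Python sketch of what a procedural counting-task generator paired with a verifiable binary reward might look like. All names (`make_counting_example`, `verifiable_reward`) and the specific difficulty knobs (`seq_len` for complexity, `n_symbols` for diversity) are illustrative assumptions, not the authors' implementation.

```python
import random

def make_counting_example(seq_len, n_symbols, rng):
    """Generate one counting problem with an exact ground-truth answer.

    Hypothetical knobs: seq_len stands in for task complexity and
    n_symbols for diversity; neither name comes from the paper.
    """
    symbols = [chr(ord("a") + i) for i in range(n_symbols)]
    seq = [rng.choice(symbols) for _ in range(seq_len)]
    target = rng.choice(symbols)
    prompt = (
        f"How many times does '{target}' appear in the sequence "
        f"{' '.join(seq)}? Answer with a single integer."
    )
    # The exact answer is known by construction, which is what makes
    # the reward verifiable rather than learned or annotated.
    return {"prompt": prompt, "answer": seq.count(target)}

def verifiable_reward(model_output, example):
    """Binary reward: 1.0 iff the model's final integer matches the answer."""
    digits = [tok for tok in model_output.split() if tok.lstrip("-").isdigit()]
    return 1.0 if digits and int(digits[-1]) == example["answer"] else 0.0

# A mixed-complexity training set, per finding (3): sample sequence
# lengths across several difficulty levels instead of a single one.
rng = random.Random(0)
dataset = [
    make_counting_example(seq_len=rng.choice([10, 25, 50]), n_symbols=5, rng=rng)
    for _ in range(100)
]
```

Because every example carries its ground truth, the same generator can emit held-out evaluation sets at higher `seq_len` values than were seen in training, which is how cross-complexity generalization of the kind described in finding (2) could be measured.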