Charon: A Unified and Fine-Grained Simulator for Large-Scale LLM Training and Inference
Abstract
Deploying large-scale LLM training and inference with optimal performance is exceptionally challenging due to a complex design space of parallelism strategies, system optimizations, and hardware configurations. Accurate and rapid performance simulation is critical for guiding optimization efforts and system studies by validating “what-if” hypotheses. To address this challenge, we introduce Charon, a unified, modular, and fine-grained simulator that accurately predicts LLM performance. Experiments show that Charon achieves high accuracy across different models and configurations, with an overall prediction error consistently under 5.35%, and even under 3.74% for training with more than 10,000 GPUs. In a practical inference deployment case, Charon discovered a configuration that improved system throughput by 275% over a manually tuned baseline, demonstrating its significant real-world value.