

Poster

Photon: Federated LLM Pre-Training

Lorenzo Sani · Alex Iacob · Zeyu Cao · Royson Lee · Bill Marino · Yan Gao · Wanru Zhao · Dongqi Cai · Zexi Li · Xinchi Qiu · Nic Lane


Abstract:

Scaling large language models (LLMs) demands extensive data and computing resources, which are traditionally constrained to data centers by the high-bandwidth requirements of distributed training. Low-bandwidth methods like federated learning (FL) could enable collaborative training of larger models across weakly connected GPUs if they can be used effectively for pre-training. To achieve this, we introduce SystemX, the first complete system for federated end-to-end LLM training, leveraging cross-silo FL for global-scale training with minimal communication overhead. Using SystemX, we train the first federated family of decoder-only LLMs from scratch. We show that: (1) SystemX can train models of up to 7B parameters in a federated fashion while reaching even better perplexity than centralized pre-training; (2) SystemX training time decreases with available compute, achieving a compute-time trade-off similar to centralized training; and (3) SystemX outperforms the wall-clock time of baseline distributed training methods by 35% while communicating 64×–512× less. Our proposal is robust to data heterogeneity and converges twice as fast as previous methods like DiLoCo. This surprising data efficiency stems from a unique approach combining small client batch sizes with extremely high learning rates, enabled by federated averaging's robustness to hyperparameters. SystemX thus represents the first economical system for global, internet-wide LLM pre-training.
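To make the training loop described above concrete, the following is a minimal sketch of federated averaging with local optimization steps, assuming PyTorch. The tiny linear model, random data, client count, local step count, batch shape, and learning rate are illustrative placeholders standing in for an LLM and its pre-training corpus; they are not SystemX's actual architecture or hyperparameters.

```python
# Hedged sketch: FedAvg-style rounds with small local batches and a high
# learning rate, echoing the recipe in the abstract. All values are
# placeholders, not the paper's configuration.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, lr=1e-2, local_steps=8):
    """Run a few local steps on one client's data; return updated weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for step in range(local_steps):
        x, y = data[step % len(data)]
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(client_states):
    """Average client weights parameter-wise (unweighted, for simplicity)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in client_states]).mean(dim=0)
    return avg

# Toy setup: a linear layer stands in for an LLM; random tensors stand in
# for each client's local shard of the pre-training corpus.
torch.manual_seed(0)
global_model = nn.Linear(16, 16)
clients = [[(torch.randn(4, 16), torch.randn(4, 16)) for _ in range(8)]
           for _ in range(3)]

for round_idx in range(5):  # each round costs one communication step
    states = [local_update(global_model, data) for data in clients]
    global_model.load_state_dict(fedavg(states))
```

Because model weights are exchanged only once per round rather than gradients every step, the communication volume drops by roughly the number of local steps, which is the mechanism behind the low-bandwidth claim.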
