Session
Research-Track Oral Presentation: R8: Federated Learning
Grand Ballroom 1
DisAgg: Distributed Aggregators for Efficient Secure Aggregation
Haaris Mehmood ⋅ Giorgos Tatsis ⋅ Dimitrios Alexopoulos ⋅ Karthikeyan Saravanan ⋅ Jie Xi ⋅ Mete Ozay
Federated learning enables collaborative model training across distributed clients, yet vanilla FL exposes client updates to the central server. Secure‑aggregation schemes protect privacy against an honest‑but‑curious server, but existing approaches often suffer from many communication rounds, heavy public‑key operations, or difficulty handling client dropouts. Recent methods such as One‑Shot Private Aggregation (OPA) cut rounds to a single server interaction per FL iteration, yet they impose substantial cryptographic and computational overhead on both server and clients. We propose a new protocol that delegates aggregation to a small committee of clients called \textit{aggregators}: each client secret‑shares its update vector to the aggregators, which locally compute partial sums and return only aggregated shares for server‑side reconstruction. This design eliminates local masking and expensive homomorphic encryption, reducing endpoint computation while preserving privacy against a curious server and a limited fraction of colluding clients. By exploiting optimal trade-offs between communication and computation costs, our protocol is at least $1.9\times$ faster than OPA, the previous best protocol, in extensive experiments with up to 50k users and 10k‑dimensional update vectors.
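The aggregation step the abstract describes can be illustrated with plain additive secret sharing. This is a minimal sketch, not the DisAgg protocol itself: the function names (`share`, `secure_sum`), the field modulus, and the scalar (rather than vector) updates are all illustrative assumptions.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus for the additive shares

def share(value, n_aggregators):
    """Split one integer update into n additive shares mod PRIME.
    Any subset of fewer than n shares reveals nothing about the value."""
    shares = [random.randrange(PRIME) for _ in range(n_aggregators - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(client_updates, n_aggregators=3):
    """Each client sends one share to each aggregator; each aggregator
    sums the shares it received; the server reconstructs only the total."""
    partial_sums = [0] * n_aggregators  # what each aggregator holds
    for update in client_updates:
        for j, s in enumerate(share(update, n_aggregators)):
            partial_sums[j] = (partial_sums[j] + s) % PRIME
    # server-side reconstruction: sum of the aggregated shares
    return sum(partial_sums) % PRIME
```

Note that the server only ever sees the per-aggregator partial sums, never an individual client's shares, which is the privacy property the committee design relies on.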
FLoRIST: Singular Value Thresholding for Efficient and Accurate Federated Fine-Tuning of Large Language Models
Hariharan Ramesh ⋅ Jyotikrishna Dass
Integrating Low-Rank Adaptation (LoRA) into federated learning offers a promising solution for parameter-efficient fine-tuning of Large Language Models (LLMs) without sharing local data. However, several methods designed for federated LoRA present significant challenges in balancing communication efficiency, model accuracy, and computational cost, particularly among heterogeneous clients. These methods either rely on simplistic averaging of local adapters, which introduces aggregation noise; transmit large stacked local adapters, leading to poor communication efficiency; or reconstruct a memory-dense global weight-update matrix and perform a computationally expensive decomposition to design client-specific low-rank adapters. In this work, we propose FLoRIST, a federated fine-tuning framework that achieves mathematically accurate aggregation without incurring high communication or computational overhead. Instead of constructing the full global weight-update matrix at the server, FLoRIST employs an efficient decomposition pipeline by performing singular value decomposition on stacked local adapters separately. This approach operates within a compact intermediate space to represent the accumulated information from local LoRAs. We introduce tunable singular value thresholding for server-side optimal rank selection to construct a pair of global low-rank adapters shared by all clients. Extensive empirical evaluations across multiple datasets and LLMs demonstrate that FLoRIST consistently strikes the best balance between superior communication efficiency and competitive performance in both homogeneous and heterogeneous setups.
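The key idea, decomposing the stacked factors instead of the full weight-update matrix, can be sketched as follows. This is an assumed reconstruction of the pipeline, not the authors' code: the function name `florist_aggregate`, the energy-based thresholding rule, and the weighting scheme are illustrative choices; only the "SVD on stacked adapters in a compact intermediate space" structure comes from the abstract.

```python
import numpy as np

def florist_aggregate(Bs, As, weights, tau=1.0):
    """Aggregate client LoRA adapters (each update is B_i @ A_i) without
    ever forming the full d x k weight-update matrix.
    Uses sum_i w_i B_i A_i = B_stack @ A_stack with stacked factors."""
    B_stack = np.hstack([w * B for w, B in zip(weights, Bs)])  # d x R
    A_stack = np.vstack(As)                                    # R x k
    # Thin SVDs of the two stacked factors (cheap: inner dim R = sum r_i)
    Ub, Sb, Vb = np.linalg.svd(B_stack, full_matrices=False)
    Ua, Sa, Va = np.linalg.svd(A_stack, full_matrices=False)
    # Small R x R core matrix in the compact intermediate space
    core = (np.diag(Sb) @ Vb) @ (Ua @ np.diag(Sa))
    Uc, Sc, Vc = np.linalg.svd(core)
    # Tunable singular value thresholding: keep the smallest rank whose
    # singular values capture a tau fraction of the total energy
    energy = np.cumsum(Sc**2) / np.sum(Sc**2)
    r = min(int(np.searchsorted(energy, tau) + 1), len(Sc))
    B_glob = Ub @ Uc[:, :r] @ np.diag(Sc[:r])  # global adapter pair
    A_glob = Vc[:r, :] @ Va
    return B_glob, A_glob
```

With `tau=1.0` the product `B_glob @ A_glob` equals the weighted sum of the client updates exactly (up to floating point), which is the "mathematically accurate aggregation" property; lowering `tau` trades accuracy for a smaller shared rank.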
PLayer-FL: A Principled Approach to Personalized Layer-wise Cross-Silo Federated Learning
Ahmed Elhussein ⋅ Florent Pollet ⋅ Gamze Gursoy
Federated learning (FL) with non-IID data often degrades client performance below local training baselines. Partial FL addresses this by federating only early layers that learn transferable features, but existing methods rely on ad-hoc, architecture-specific heuristics. We first conduct a systematic analysis of layer-wise generalization dynamics in FL, revealing an early-emerging transition between generalizable (safe-to-federate) and task-specific (should-remain-local) layers. Building on this, we introduce Principled Layer-wise Federated Learning (PLayer-FL), which aims to deliver the benefits of federation more robustly. PLayer-FL computes a novel federation-sensitivity metric efficiently after a single training epoch to choose the optimal split point for a given task. Inspired by model pruning, the metric quantifies each layer’s robustness to aggregation and highlights where federation shifts from beneficial to detrimental. We show that this metric correlates strongly with established generalization measures across diverse architectures. Crucially, experiments demonstrate that PLayer-FL achieves consistently competitive performance across a wide range of tasks while distributing gains more equitably and reducing client-side regressions relative to baselines.
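The partial-federation scheme the abstract builds on can be sketched in a few lines. This is a generic illustration of federating early layers while keeping later layers local, under assumed names (`federate_early_layers`, `split_idx`); PLayer-FL's actual contribution, the federation-sensitivity metric that picks `split_idx`, is not reproduced here.

```python
def federate_early_layers(client_models, split_idx, weights):
    """Partial FL round: layers before split_idx (generalizable) are
    replaced by a weighted average across clients; layers from
    split_idx onward (task-specific) stay local to each client.
    Each model is a list of per-layer parameter tensors/scalars."""
    n_layers = len(client_models[0])
    new_models = []
    for model in client_models:
        merged = []
        for layer in range(n_layers):
            if layer < split_idx:  # safe-to-federate: aggregate
                merged.append(sum(w * m[layer]
                                  for w, m in zip(weights, client_models)))
            else:                  # should-remain-local: keep as-is
                merged.append(model[layer])
        new_models.append(merged)
    return new_models
```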
ProToken: Token-Level Attribution for Federated Large Language Models
Waris Gill ⋅ Ahmad Humayun ⋅ Ali Anwar ⋅ Muhammad Ali Gulzar
Federated Learning (FL) enables collaborative training of Large Language Models (LLMs) across distributed data sources while preserving privacy. However, when federated LLMs are deployed in critical applications, it remains unclear which client(s) contributed to specific generated responses, hindering debugging, malicious client identification, fair reward allocation, and trust verification. We present ProToken, a novel Provenance methodology for Token-level attribution in federated LLMs that addresses client attribution during autoregressive text generation while maintaining FL privacy constraints. ProToken leverages two key insights to enable provenance at each token: (1) transformer architectures concentrate task-specific signals in later blocks, enabling strategic layer selection for computational tractability, and (2) gradient-based relevance weighting filters out irrelevant neural activations, focusing attribution on neurons that directly influence token generation. We evaluate ProToken across 16 configurations spanning four LLM architectures (Gemma, Llama, Qwen, SmolLM) and four domains (medical, financial, mathematical, coding). ProToken achieves 98.62% average attribution accuracy in correctly localizing responsible client(s), and maintains high accuracy as the number of clients scales, validating its practical viability for real-world deployment settings.
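Insight (2), gradient-based relevance weighting, follows the common gradient-times-activation pattern. The sketch below is a hypothetical illustration of that pattern, not ProToken's implementation; the function name `token_relevance` and the top-k selection rule are assumptions.

```python
import numpy as np

def token_relevance(activations, gradients, top_k=5):
    """Score each neuron in a selected later block by
    |activation * gradient| for one generated token, and keep the
    top-k as that token's attribution signature. Neurons with large
    activations but near-zero gradients (or vice versa) are filtered
    out, since they did not influence the token."""
    relevance = np.abs(activations * gradients)
    top = np.argsort(relevance)[::-1][:top_k]
    return top, relevance[top]
```

Comparing such signatures against per-client statistics is one way attribution could then proceed, but that matching step is beyond this sketch.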
SONAR: Benchmarking Topology and Collaboration in Decentralized Learning
Joyce Yuan ⋅ Yichuan Shi ⋅ Abhishek Singh ⋅ Rishi Sharma ⋅ Ramesh Raskar ⋅ Martin Jaggi
The performance, efficiency, and reliability of decentralized machine learning hinge on systems factors such as network topology, communication budget, and device heterogeneity—yet existing frameworks treat these as fixed or opaque. Federated learning remains centrally orchestrated, while peer-to-peer (P2P) approaches lack a unified foundation for analyzing how topology and system design jointly shape learning outcomes. We present \textbf{SONAR}, a systems framework for reproducible, topology-aware decentralized learning. SONAR unifies communication, topology, and telemetry in a layered architecture supporting multiple backends (gRPC, MPI, WebRTC), static and adaptive graphs, and per-node logging of bandwidth, latency, and collaboration dynamics. Using SONAR, we make three observations: (1) topology and its graph-level statistics show no consistent or linear correlation with learning performance across accuracy, robustness, and privacy metrics, underscoring the need to study topology as an independent systems variable; (2) under realistic constraints such as limited communication rounds or bandwidth, topology governs how quickly information propagates—producing up to ≈ 20% performance differences between graph families; and (3) adaptive neighbor selection can induce collaborator collapse—a failure mode where network diversity erodes over time. By exposing topology as a first-class experimental dimension, SONAR enables systematic, reproducible evaluation of decentralized learning across performance, efficiency, and robustness regimes.
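How topology governs information propagation (observation 2) can be seen in a minimal gossip-averaging round over an adjacency matrix. This is a generic decentralized-learning primitive for illustration, not part of SONAR's API; the function name `gossip_round` and the uniform neighbor weighting are assumptions.

```python
def gossip_round(params, adjacency):
    """One synchronous gossip round: each node replaces its parameter
    with the uniform average over its neighbors and itself.
    `adjacency[i][j]` is truthy iff nodes i and j are connected."""
    n = len(params)
    new = []
    for i in range(n):
        neighbors = [j for j in range(n) if adjacency[i][j]] + [i]
        new.append(sum(params[j] for j in neighbors) / len(neighbors))
    return new
```

On a fully connected graph every node reaches the global average in one round, while on a sparse ring the same consensus takes many rounds; under a fixed communication budget that gap is exactly the graph-family performance difference the abstract reports.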
Zero redundancy distributed learning with differential privacy
Zhiqi Bu ⋅ Justin Chiu ⋅ Ruixuan Liu ⋅ Sheng Zha ⋅ George Karypis
Deep learning using large models has achieved great success in a wide range of domains. However, training these models on billions of parameters is very challenging in terms of training speed, memory cost, and communication efficiency, especially under the privacy-preserving regime with differential privacy (DP). On the one hand, the efficiency of DP optimization is comparable to that of standard non-DP optimization on a single GPU, but existing DP distributed learning is significantly inefficient on multiple GPUs. On the other hand, the Zero Redundancy Optimizer (ZeRO) is a state-of-the-art solution for standard distributed learning, but it can be technically complicated to make it work compatibly with DP. In this work, we develop a new systematic solution, DP-ZeRO, (I) to scale up the trainable DP model size, e.g. to GPT-100B, (II) to obtain the same computation and communication efficiency as the standard ZeRO, and (III) to enable mixed-precision DP training. Our DP-ZeRO, like the standard ZeRO, has the potential to train models of arbitrary size and exhibits excellent training efficiency on large models. Code at \url{https://anonymous.4open.science/r/fast-differential-privacy-3B50}.
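The DP step that complicates distributed training is per-sample gradient clipping plus noise addition (the DP-SGD recipe). The sketch below shows why it differs from standard optimization; it is a single-device illustration under assumed names (`dp_aggregate`, `noise_mult`), not the DP-ZeRO system, which additionally partitions these operations across GPUs.

```python
import numpy as np

def dp_aggregate(per_sample_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """DP-SGD-style gradient step: clip each per-sample gradient to
    clip_norm, sum the clipped gradients, add Gaussian noise scaled by
    noise_mult * clip_norm, then average over the batch."""
    if rng is None:
        rng = np.random.default_rng(0)
    total = np.zeros_like(per_sample_grads[0])
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        total += g * min(1.0, clip_norm / max(norm, 1e-12))  # clip
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

Because clipping needs every per-sample gradient before they can be reduced, naively combining it with sharded optimizers changes the communication pattern, which is the compatibility problem DP-ZeRO addresses.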