DisAgg: Distributed Aggregators for Efficient Secure Aggregation
Haaris Mehmood ⋅ Giorgos Tatsis ⋅ Dimitrios Alexopoulos ⋅ Karthikeyan Saravanan ⋅ Jie Xi ⋅ Mete Ozay
Abstract
Federated learning (FL) enables collaborative model training across distributed clients, yet vanilla FL exposes client updates to the central server. Secure‑aggregation schemes protect privacy against an honest‑but‑curious server, but existing approaches often require many communication rounds, rely on heavy public‑key operations, or struggle to handle client dropouts. Recent methods such as One‑Shot Private Aggregation (OPA) cut rounds to a single server interaction per FL iteration, yet they impose substantial cryptographic and computational overhead on both server and clients. We propose a new protocol that delegates aggregation to a small committee of clients called \textit{aggregators}: each client secret‑shares its update vector with the aggregators, each of which locally computes a partial sum and returns only an aggregated share for server‑side reconstruction. This design eliminates local masking and expensive homomorphic encryption, reducing endpoint computation while preserving privacy against a curious server and a limited fraction of colluding clients. Extensive experiments with up to 50k users and 10k‑dimensional update vectors show that, by exploiting favorable trade‑offs between communication and computation costs, our protocol is at least $1.9\times$ faster than OPA, the previous best protocol.
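The aggregation flow described above (clients secret‑share updates to a committee, aggregators compute partial sums, the server reconstructs the total) can be illustrated with a minimal additive secret‑sharing sketch. This is an illustration only: the modulus `P`, the function names, and the use of plain additive sharing are assumptions for exposition, not the paper's actual scheme, which may use a different sharing structure (e.g. threshold sharing for dropout tolerance).

```python
import random

P = 2**31 - 1  # illustrative modulus; not specified by the paper
# NOTE: random.randrange is fine for a toy demo; a real protocol would
# use a cryptographically secure source such as the secrets module.

def share(vec, m):
    """Split an integer vector into m additive shares mod P."""
    shares = [[random.randrange(P) for _ in vec] for _ in range(m - 1)]
    last = [(v - sum(col)) % P for v, col in zip(vec, zip(*shares))]
    return shares + [last]

def partial_sum(share_vectors):
    """An aggregator sums the share vectors it received, coordinate-wise."""
    return [sum(col) % P for col in zip(*share_vectors)]

# Toy run: 3 clients, a committee of m = 2 aggregators, 4-dimensional updates.
clients = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
m = 2
received = [[] for _ in range(m)]  # received[j]: shares sent to aggregator j
for vec in clients:
    for j, s in enumerate(share(vec, m)):
        received[j].append(s)

partials = [partial_sum(r) for r in received]        # aggregators' local work
total = [sum(col) % P for col in zip(*partials)]     # server-side reconstruction
# total equals the coordinate-wise sum of all client updates: [15, 18, 21, 24]
```

No single aggregator learns anything about an individual update from its shares alone, and the server only ever sees the aggregated partial sums, matching the privacy goal stated in the abstract.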