

Poster

FedTrans: Efficient Federated Learning via Multi-Model Transformation

Yuxuan Zhu · Jiachen Liu · Mosharaf Chowdhury · Fan Lai

Wed 15 May 3:30 p.m. PDT — 3:50 p.m. PDT

Abstract: Federated learning (FL) aims to train machine learning (ML) models across potentially millions of edge client devices. Yet, training and customizing models for FL clients is notoriously challenging due to the heterogeneity of client data, device capabilities, and the massive scale of clients, which makes individualized model exploration prohibitively expensive. State-of-the-art FL solutions personalize a globally trained model or concurrently train multiple models, but they often incur suboptimal model accuracy and high training costs. In this paper, we introduce FedTrans, a multi-model FL training framework that automatically produces and trains high-accuracy, hardware-compatible models for individual clients at scale. FedTrans begins with a basic global model, identifies accuracy bottlenecks in model architectures during training, and then employs model transformation to derive new models for heterogeneous clients on the fly. It judiciously assigns models to individual clients while performing soft aggregation on multi-model updates to minimize total training costs. Our evaluations under realistic settings show that FedTrans improves individual client model accuracy by 13% while cutting training costs by 4× compared to state-of-the-art solutions.
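The round structure the abstract describes (derive models by transformation, assign them to heterogeneous clients, then aggregate multi-model updates) lends itself to a compact illustration. The following is a minimal, hypothetical sketch of one FedTrans-style round; the two-member model family, the layer-widening "transformation", the capability-based assignment, and the shape-matching soft-aggregation rule are all illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of one FedTrans-style round (illustration only; the
# transformation heuristic, assignment rule, and soft-aggregation rule below
# are assumptions, not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(width: int) -> nn.Sequential:
    # Toy model family: the "transformation" here is simply widening the
    # hidden layer to relieve an assumed accuracy bottleneck.
    return nn.Sequential(nn.Linear(10, width), nn.ReLU(), nn.Linear(width, 2))

def local_sgd(model: nn.Module, batch, lr: float = 0.1) -> dict:
    # One local SGD step on a client's (x, y) batch; returns updated weights.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = batch
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()
    return {k: v.clone() for k, v in model.state_dict().items()}

def soft_aggregate(model: nn.Module, updates: list[dict]) -> None:
    # "Soft" aggregation (assumed rule): for each tensor, average every client
    # update whose shape matches, so models in the family can share updates
    # for compatible parameters instead of training in isolation.
    new_state = {}
    for name, param in model.state_dict().items():
        matching = [u[name] for u in updates
                    if name in u and u[name].shape == param.shape]
        new_state[name] = torch.stack(matching).mean(0) if matching else param
    model.load_state_dict(new_state)

# --- One simulated round over heterogeneous clients ------------------------
torch.manual_seed(0)
family = {"narrow": make_model(8), "wide": make_model(32)}  # derived models
clients = [("narrow", 0.3), ("wide", 0.9), ("wide", 0.8)]   # (arch, capability)

updates = []
for arch, _capability in clients:  # models assigned by device capability
    local = make_model(8 if arch == "narrow" else 32)
    local.load_state_dict(family[arch].state_dict())
    batch = (torch.randn(16, 10), torch.randint(0, 2, (16,)))
    updates.append(local_sgd(local, batch))

for arch, model in family.items():
    soft_aggregate(model, updates)  # multi-model soft aggregation
```

In this toy family the output-layer bias has the same shape in both architectures, so its updates are averaged across all clients regardless of which model they trained, a crude stand-in for the cross-model update sharing that soft aggregation enables.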
