FedFTHA: A Fine-Tuning and Head Aggregation Method in Federated Learning
journal contribution
posted on 2024-11-03, 09:25, authored by Yansong Wang, Hui Xu, Waqar Ali, Miaobo Li, Xiangmin Zhou, Jie Shao
Personalized federated learning is a sub-field of federated learning. In contrast to conventional federated learning, which seeks a single general global model, personalized federated learning produces for each client a personalized model adapted to that client's local data distribution. Some existing personalized federated learning methods focus only on improving client-side personalization, neglecting server-side generalization. To address this issue, we propose a fine-tuning and head aggregation method in federated learning (FedFTHA). Each client maintains a personalized model head and fine-tunes it after each local update, yielding a local model that contains the personalized head. During FedFTHA training, these personalized heads are aggregated on the server to produce a generalized head for the global model. FedFTHA thus meets the needs of both client-side model personalization and server-side model generalization. In addition, a universal optimization framework is employed to prove its convergence under both convex and non-convex conditions. We verify the personalization ability and generalization performance of FedFTHA under heterogeneous settings on benchmark datasets. The comparative analysis confirms the effectiveness of our proposal.
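The training loop described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: a toy linear model split into a shared "body" and a per-client "head", a fixed number of head fine-tuning steps, and simple averaging as the head-aggregation rule. It is not the authors' exact algorithm, only a runnable picture of the body/head split, the post-update head fine-tuning, and the server-side head aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (assumed, not from the paper):
# each client's linear model is split into a shared "body"
# and a personalized "head" kept and fine-tuned locally.
NUM_CLIENTS, DIM, ROUNDS, LR = 4, 6, 20, 0.05
BODY = DIM // 2  # first BODY weights form the body, the rest the head

# Heterogeneous local data: each client regresses on its own target.
data = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(32, DIM))
    y = X @ rng.normal(size=DIM)
    data.append((X, y))

def grad(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2 * X.T @ (X @ w - y) / len(y)

global_body = np.zeros(BODY)
global_head = np.zeros(DIM - BODY)           # "generalized head"
heads = [global_head.copy() for _ in range(NUM_CLIENTS)]

for _ in range(ROUNDS):
    bodies, round_heads = [], []
    for k, (X, y) in enumerate(data):
        w = np.concatenate([global_body, heads[k]])
        # Local update of the full model.
        w -= LR * grad(w, X, y)
        # Fine-tune only the personalized head afterwards.
        for _ in range(3):
            w[BODY:] -= LR * grad(w, X, y)[BODY:]
        bodies.append(w[:BODY])
        heads[k] = w[BODY:]
        round_heads.append(heads[k])
    # Server: average bodies as usual, and aggregate the
    # personalized heads into a generalized head (here: mean).
    global_body = np.mean(bodies, axis=0)
    global_head = np.mean(round_heads, axis=0)

global_model = np.concatenate([global_body, global_head])
print(global_model.shape)  # (6,)
```

The sketch keeps the two outputs the abstract emphasizes: each client ends with its own fine-tuned head (`heads[k]`) for personalization, while the server ends with a generalized head (`global_head`) for the global model.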