Towards Boosting Federated Learning Convergence: A Computation Offloading & Clustering Approach
Source of Publication
ICC 2023 - IEEE International Conference on Communications
By opening the door to a new era of Machine Learning, Federated Learning (FL) is revolutionizing Artificial Intelligence: it exploits both decentralized data and decentralized computation to preserve user privacy. Despite its popularity as the most widely used framework today, FL becomes a sub-optimal solution when the global model converges slowly, which exacerbates communication bottlenecks. To address this challenge, we propose CISCO-FL, a Clustered FL approach with Intelligent Selection and Computation Offloading. First, we partition the clients into groups, where sub-aggregations of the clients' models are performed in each cluster before the global aggregation. Second, we study the clients' computing resources and embed in the proposed approach an intelligent offloading model, in which clients with abundant computational resources assist and optimize the models of those struggling with limited resources. As a result, both communication costs and computation resources are reduced and optimized. Finally, thorough experimental results are presented to support our findings and validate our model.
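The two-level aggregation the abstract describes (per-cluster sub-aggregation followed by a global aggregation) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names are hypothetical, and FedAvg-style sample-count weighting is an assumption.

```python
import numpy as np

def weighted_average(models, weights):
    # FedAvg-style weighted average of model parameter vectors
    # (weighting by sample count is an assumption for illustration).
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def clustered_aggregate(clusters):
    """Two-level aggregation: sub-aggregate inside each cluster,
    then aggregate the cluster models globally.

    `clusters` is a list of clusters; each cluster is a list of
    (model_vector, num_samples) pairs for its clients.
    """
    cluster_models, cluster_sizes = [], []
    for cluster in clusters:
        models = [m for m, _ in cluster]
        sizes = [n for _, n in cluster]
        cluster_models.append(weighted_average(models, sizes))
        cluster_sizes.append(sum(sizes))
    # Global aggregation over the per-cluster sub-aggregates.
    return weighted_average(cluster_models, cluster_sizes)
```

With sample-count weighting at both levels, the hierarchical result coincides with a flat weighted average over all clients, so the clustering changes the communication pattern rather than the aggregate itself.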
Training, Data privacy, Costs, Federated learning, Computational modeling, Boosting, Servers
AbdulRahman, Sawsan; Bouachir, Ouns; Otoum, Safa; and Mourad, Azzam, "Towards Boosting Federated Learning Convergence: A Computation Offloading & Clustering Approach" (2023). All Works. 6149.
Indexed in Scopus