Enhancing Communication Efficiency in FL With Adaptive Gradient Quantization and Communication Frequency Optimization
Document Type
Conference Proceeding
Source of Publication
IEEE International Conference on Communications
Publication Date
9-26-2025
Abstract
Federated Learning (FL) enables participant devices to collaboratively train deep learning models without sharing their data with the server or other devices, effectively addressing data privacy and computational concerns. However, FL faces a major bottleneck due to the high communication overhead of frequent model updates between devices and the server, limiting deployment in resource-constrained wireless networks. In this paper, we propose a three-fold strategy: firstly, an Adaptive Feature-Elimination Strategy to drop less important features while retaining high-value ones; secondly, Adaptive Gradient Innovation and Error Sensitivity-Based Quantization, which dynamically adjusts the quantization level for innovative gradient compression; and thirdly, Communication Frequency Optimization to enhance communication efficiency. We evaluated our proposed model's performance through extensive experiments, assessing accuracy, loss, and convergence against baseline techniques. The results show that our model achieves high communication efficiency while maintaining accuracy.
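The abstract describes the quantization and frequency-control ideas only at a high level, so the following minimal Python sketch is an illustration rather than the paper's actual method. It assumes uniform stochastic quantization, an innovation-driven bit-width rule, and an error-feedback buffer for skipped updates; every name and parameter here (stochastic_quantize, pick_bits, b_min, b_max, innovation_ref) is hypothetical and not taken from the paper.

import numpy as np

def stochastic_quantize(grad, bits):
    # Uniform stochastic quantization of a gradient vector to 2**bits - 1 levels.
    levels = 2 ** bits - 1
    scale = float(np.max(np.abs(grad)))
    if scale == 0.0:
        return np.zeros_like(grad)
    normalized = np.abs(grad) / scale * levels
    lower = np.floor(normalized)
    prob = normalized - lower
    quantized = lower + (np.random.rand(*grad.shape) < prob)
    return np.sign(grad) * quantized * scale / levels

def pick_bits(grad, last_sent, b_min=2, b_max=8, innovation_ref=0.5):
    # Map gradient "innovation" (relative change since the last transmitted
    # update) to a bit-width: stale gradients get coarse quantization.
    innovation = np.linalg.norm(grad - last_sent) / (np.linalg.norm(last_sent) + 1e-12)
    frac = min(innovation / innovation_ref, 1.0)
    return int(round(b_min + frac * (b_max - b_min)))

# Toy client-side rounds: quantize adaptively, and skip the upload entirely
# when the innovation is negligible (communication frequency optimization).
rng = np.random.default_rng(0)
last_sent = np.zeros(10)
residual = np.zeros(10)           # error-feedback buffer for deferred mass
for rnd in range(5):
    grad = rng.normal(size=10) * 0.1 + residual
    if np.linalg.norm(grad - last_sent) < 0.05 * (np.linalg.norm(last_sent) + 1e-12):
        residual = grad           # too little innovation: defer this update
        continue
    bits = pick_bits(grad, last_sent)
    update = stochastic_quantize(grad, bits)
    residual = grad - update      # carry quantization error into the next round
    last_sent = update
    print(f"round {rnd}: sent {bits}-bit update")

In this sketch, low-innovation rounds fall below a send threshold and are folded into the residual, so devices communicate less often while the quantization error is recovered in later rounds; how the paper actually sets these thresholds is not specified in the abstract.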
ISBN
9798331505219
Publisher
IEEE
First Page
1201
Last Page
1206
Disciplines
Computer Sciences
Keywords
Artificial Intelligence, Communication Efficiency, Data Quality, Federated Learning, Privacy, Quantization
Recommended Citation
Tariq, Asadullah; Qayyum, Tariq; Serhani, Mohamed Adel; Sallabi, Farag M.; Taleb, Ikbal; and Barka, Ezedin S., "Enhancing Communication Efficiency in FL With Adaptive Gradient Quantization and Communication Frequency Optimization" (2025). All Works. 7587.
https://zuscholars.zu.ac.ae/works/7587
Indexed in Scopus
yes
Open Access
no