TY - GEN
T1 - Efficient Federated Learning Convergence with Epoch Adaptation
AU - Nguyen, Huy Hieu
AU - Hoang, Nam Thang
AU - Tran, Hai Anh
AU - Mandal, Tulika
AU - Annareddy, Ruthvik
AU - Choudhary, Prithvi
AU - Tran, Truong X.
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated Learning (FL) is well-suited for the Internet of Things and Cloud Computing due to its ability to preserve data privacy, handle large-scale deployments, work with resource-constrained devices, and integrate with edge computing architectures. However, practical FL often encounters heterogeneity problems that stem from differences in device resources and user usage patterns. These issues can slow model convergence, requiring more communication rounds to reach final convergence and prolonging the training time of each round. In this paper, we propose a new Epoch Adaptation Mechanism for Efficient Convergence framework for FL to address these heterogeneities. By calculating a suitable number of local epochs for each client based on its computation and communication time in a training round, our method mitigates the waiting time caused by system heterogeneity by increasing the number of epochs on faster clients. This strategy accelerates model convergence without extending the training time of each round. In addition, a double-sided mechanism is applied to our framework to prevent overfitting during the training stage. Experimental results show that our framework can boost the convergence of the global model under statistical heterogeneity by up to 80% on the EMNIST dataset and 35% on the CIFAR-10 dataset.
AB - Federated Learning (FL) is well-suited for the Internet of Things and Cloud Computing due to its ability to preserve data privacy, handle large-scale deployments, work with resource-constrained devices, and integrate with edge computing architectures. However, practical FL often encounters heterogeneity problems that stem from differences in device resources and user usage patterns. These issues can slow model convergence, requiring more communication rounds to reach final convergence and prolonging the training time of each round. In this paper, we propose a new Epoch Adaptation Mechanism for Efficient Convergence framework for FL to address these heterogeneities. By calculating a suitable number of local epochs for each client based on its computation and communication time in a training round, our method mitigates the waiting time caused by system heterogeneity by increasing the number of epochs on faster clients. This strategy accelerates model convergence without extending the training time of each round. In addition, a double-sided mechanism is applied to our framework to prevent overfitting during the training stage. Experimental results show that our framework can boost the convergence of the global model under statistical heterogeneity by up to 80% on the EMNIST dataset and 35% on the CIFAR-10 dataset.
UR - https://www.scopus.com/pages/publications/105017859886
UR - https://www.scopus.com/inward/citedby.url?scp=105017859886&partnerID=8YFLogxK
U2 - 10.1109/IRI66576.2025.00041
DO - 10.1109/IRI66576.2025.00041
M3 - Conference contribution
AN - SCOPUS:105017859886
T3 - Proceedings - 2025 IEEE International Conference on Information Reuse and Integration and Data Science, IRI 2025
SP - 178
EP - 183
BT - Proceedings - 2025 IEEE International Conference on Information Reuse and Integration and Data Science, IRI 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th IEEE International Conference on Information Reuse and Integration and Data Science, IRI 2025
Y2 - 6 August 2025 through 8 August 2025
ER -