Communication-Efficient Adaptive Federated Learning

Yujia Wang, Lu Lin, Jinghui Chen

Research output: Contribution to journal › Conference article › peer-review


Abstract

Federated learning is a machine learning training paradigm that enables clients to jointly train models without sharing their own localized data. However, implementing federated learning in practice still faces numerous challenges, such as the large communication overhead due to repetitive server-client synchronization and the lack of adaptivity in SGD-based model updates. Although various methods have been proposed to reduce the communication cost via gradient compression or quantization, and federated versions of adaptive optimizers such as FedAdam have been proposed to add adaptivity, the current federated learning framework still cannot address all of these challenges at once. In this paper, we propose a novel communication-efficient adaptive federated learning method (FedCAMS) with theoretical convergence guarantees. We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $\mathcal{O}(1/\sqrt{TKm})$ as its non-compressed counterparts. Extensive experiments on various benchmarks verify our theoretical analysis.
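The abstract does not spell out the update rule, but the two ingredients it names, compressed client communication and adaptive server updates, can be sketched together. The snippet below is only a rough illustration in the spirit of the method: clients run local SGD and send top-k-compressed model deltas with error feedback, and the server applies an AMSGrad-style update with max-stabilized second moments. All function names, hyperparameters, and the toy quadratic clients are illustrative assumptions, not the paper's actual pseudocode.

    import numpy as np

    def top_k(vec, k):
        # Keep the k largest-magnitude entries of vec and zero the rest
        # (a standard biased compressor; an assumption, not necessarily the paper's choice).
        out = np.zeros_like(vec)
        idx = np.argpartition(np.abs(vec), -k)[-k:]
        out[idx] = vec[idx]
        return out

    def round_update(x, m, v, v_hat, errors, client_grads, k,
                     local_steps=5, eta_l=0.05, eta=0.5,
                     beta1=0.9, beta2=0.99, eps=1e-3):
        # One communication round: local SGD on each client, compressed deltas
        # with error feedback, then an AMSGrad-style adaptive step on the server.
        deltas = []
        for i, grad in enumerate(client_grads):
            y = x.copy()
            for _ in range(local_steps):
                y = y - eta_l * grad(y)           # local SGD steps
            raw = (y - x) + errors[i]             # error feedback: re-add previously dropped mass
            comp = top_k(raw, k)                  # only comp is communicated to the server
            errors[i] = raw - comp
            deltas.append(comp)
        delta = np.mean(deltas, axis=0)           # server aggregates compressed deltas
        m = beta1 * m + (1 - beta1) * delta       # first moment
        v = beta2 * v + (1 - beta2) * delta**2    # second moment
        v_hat = np.maximum(v_hat, v)              # max stabilization (AMSGrad-style)
        x = x + eta * m / (np.sqrt(v_hat) + eps)  # adaptive server update
        return x, m, v, v_hat

    # Toy usage: two clients with quadratic losses 0.5*||w - t||^2 around different optima.
    d = 10
    targets = [np.full(d, 1.0), np.full(d, 3.0)]
    client_grads = [lambda w, t=t: w - t for t in targets]
    x, m, v = np.zeros(d), np.zeros(d), np.zeros(d)
    v_hat = np.full(d, 1e-8)
    errors = [np.zeros(d) for _ in targets]
    for _ in range(100):
        x, m, v, v_hat = round_update(x, m, v, v_hat, errors, client_grads, k=3)
    print(x.round(2))  # entries drift toward the average client optimum (about 2)

Error feedback is what keeps the compressed deltas unbiased in the long run, and the max-stabilized second moment keeps the per-coordinate step size non-increasing; both are standard components in this line of work rather than details confirmed by the abstract.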

Original language: English (US)
Pages (from-to): 22802-22838
Number of pages: 37
Journal: Proceedings of Machine Learning Research
Volume: 162
State: Published - 2022
Event: 39th International Conference on Machine Learning, ICML 2022 - Baltimore, United States
Duration: Jul 17, 2022 to Jul 23, 2022

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
