FedImp: Enhancing Federated Learning Convergence with Impurity-Based Weighting

Hai Anh Tran, Cuong Ta, Truong X. Tran

Research output: Contribution to journal › Article › peer-review

Abstract

Federated Learning (FL) is a collaborative paradigm that enables multiple devices to train a global model while preserving local data privacy. A major challenge in FL is the non-Independent and Identically Distributed (non-IID) nature of data across devices, which hinders training efficiency and slows convergence. To tackle this, we propose Federated Impurity Weighting (FedImp), a novel algorithm that quantifies each device's contribution based on the informational content of its local data. These contributions are normalized to compute distinct aggregation weights for the global model update. Extensive experiments on the EMNIST and CIFAR-10 datasets show that FedImp significantly improves convergence speed, reducing communication rounds by up to 64.4%, 27.8%, and 66.7% on EMNIST, and 44.2%, 44%, and 25.6% on CIFAR-10, compared to FedAvg, FedProx, and FedAdp, respectively. Under highly imbalanced data distributions, FedImp achieves the highest accuracy among all compared methods. Overall, FedImp offers an effective solution for enhancing FL efficiency in non-IID settings.
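The abstract does not specify FedImp's exact impurity measure or weighting rule, so the following is only an illustrative sketch of impurity-based aggregation weighting: it assumes Gini impurity over each client's local label distribution, normalizes the per-client scores into aggregation weights, and forms a weighted average of client model parameters. The function names (gini_impurity, impurity_weights, aggregate) and the choice to favor higher-impurity clients are assumptions for illustration, not the paper's method.

```python
import numpy as np

def gini_impurity(label_counts):
    """Gini impurity of a client's local label distribution.

    Assumption: the abstract does not name the impurity measure;
    Gini impurity is used here purely as an example.
    """
    p = np.asarray(label_counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

def impurity_weights(clients_label_counts):
    """Normalize per-client impurity scores into aggregation weights."""
    scores = np.array([gini_impurity(c) for c in clients_label_counts])
    return scores / scores.sum()

def aggregate(client_models, weights):
    """Weighted average of client model parameters.

    Each client model is a list of numpy arrays (one per layer).
    """
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*client_models)
    ]

# Example: three clients with differently skewed label distributions.
counts = [[100, 100, 100], [290, 5, 5], [150, 100, 50]]
w = impurity_weights(counts)
print(np.round(w, 3))  # clients with more balanced (higher-impurity) data receive larger weights
```

In this sketch, clients whose local data is more diverse receive larger aggregation weights than clients with highly skewed labels; whether FedImp weights in that direction, and with which impurity measure, is defined in the paper itself.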

Original language: English (US)
Journal: IEEE Transactions on Artificial Intelligence
DOIs
State: Accepted/In press - 2025

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Artificial Intelligence
