Secure and efficient decentralized machine learning through group-based model aggregation

Brandon A. Mosqueda González, Omar Hasan, Wisnu Uriawan, Youakim Badr, Lionel Brunie

Research output: Contribution to journal › Article › peer-review

Abstract

In the domain of decentralized machine learning, enhancing privacy often comes at the cost of reduced efficiency or utility, and vice versa. Striking a balance between privacy, efficiency, and utility remains a challenge. In this paper, we present the Secure Group-Based Model Aggregation (SGBMA) framework for decentralized learning. SGBMA introduces a novel approach by dividing the set of participants into small groups and employing an efficient secure multiparty computation protocol to aggregate models within the groups. The adoption of a balanced binary tree topology over the groups facilitates the seamless combination of the models computed in the groups into a unified global model. At each training round, SGBMA achieves equal participation of every user in the global model, equivalent to federated learning. The privacy-efficiency balance can be adjusted through the group size with no impact on model utility. By leveraging SGBMA, decentralized learning can be executed while ensuring privacy, making it applicable to large-scale scenarios. Our experiments show that SGBMA produces higher model utility for Independent and Identically Distributed (IID) data and results comparable to federated learning in the non-IID case.
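The group-then-tree aggregation described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the additive-masking routine is a toy stand-in for whatever secure multiparty computation protocol SGBMA actually uses, models are reduced to single integers, and all function names are assumptions.

```python
import random

def mask_and_sum(models, modulus=2**32):
    """Toy additive secret sharing within one group: each member splits its
    model value into random shares (one per peer), peers sum the shares they
    receive, and summing those partial sums recovers the group total without
    any single party seeing an individual model. A stand-in for the paper's
    secure multiparty computation protocol."""
    n = len(models)
    shares_received = [0] * n
    for value in models:
        shares = [random.randrange(modulus) for _ in range(n - 1)]
        shares.append((value - sum(shares)) % modulus)  # shares sum to value
        for i, share in enumerate(shares):
            shares_received[i] = (shares_received[i] + share) % modulus
    return sum(shares_received) % modulus

def tree_aggregate(group_sums, group_sizes):
    """Combine per-group sums pairwise up a balanced binary tree, then divide
    by the total participant count so every user contributes equally to the
    global model, as in federated averaging."""
    while len(group_sums) > 1:
        next_sums, next_sizes = [], []
        for i in range(0, len(group_sums), 2):
            if i + 1 < len(group_sums):  # merge sibling groups
                next_sums.append(group_sums[i] + group_sums[i + 1])
                next_sizes.append(group_sizes[i] + group_sizes[i + 1])
            else:  # odd group carries up a level unchanged
                next_sums.append(group_sums[i])
                next_sizes.append(group_sizes[i])
        group_sums, group_sizes = next_sums, next_sizes
    return group_sums[0] / group_sizes[0]
```

Because the tree combines sums (not per-group averages) and divides once at the root, groups of unequal size still yield an equal-weight global average; the group size only changes how many parties each masking round involves, which is the privacy-efficiency knob the abstract mentions.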

Original language: English (US)
Journal: Cluster Computing
DOIs
State: Accepted/In press - 2023

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Networks and Communications
