TY - JOUR
T1 - Secure and efficient decentralized machine learning through group-based model aggregation
AU - Mosqueda González, Brandon A.
AU - Hasan, Omar
AU - Uriawan, Wisnu
AU - Badr, Youakim
AU - Brunie, Lionel
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
PY - 2024/7
Y1 - 2024/7
N2 - In the domain of decentralized machine learning, enhancing privacy often comes at the cost of reduced efficiency or utility, and vice versa. Striking a balance between privacy, efficiency, and utility remains a challenge. In this paper, we present the Secure Group-Based Model Aggregation (SGBMA) framework for decentralized learning. SGBMA introduces a novel approach by dividing the set of participants into small groups and employing an efficient secure multiparty computation protocol to aggregate models within the groups. The adoption of a balanced binary tree topology over the groups facilitates the seamless combination of the models computed in the groups into a unified global model. At each training round, SGBMA achieves equal participation from each user in the global model, equivalent to federated learning. The privacy-efficiency balance can be adjusted through the group size with no impact on model utility. By leveraging SGBMA, decentralized learning can be executed with privacy guarantees, making it applicable to large-scale scenarios. Our experiments show that SGBMA produces higher model utility for Independent and Identically Distributed (IID) data and comparable results to federated learning in non-IID settings.
AB - In the domain of decentralized machine learning, enhancing privacy often comes at the cost of reduced efficiency or utility, and vice versa. Striking a balance between privacy, efficiency, and utility remains a challenge. In this paper, we present the Secure Group-Based Model Aggregation (SGBMA) framework for decentralized learning. SGBMA introduces a novel approach by dividing the set of participants into small groups and employing an efficient secure multiparty computation protocol to aggregate models within the groups. The adoption of a balanced binary tree topology over the groups facilitates the seamless combination of the models computed in the groups into a unified global model. At each training round, SGBMA achieves equal participation from each user in the global model, equivalent to federated learning. The privacy-efficiency balance can be adjusted through the group size with no impact on model utility. By leveraging SGBMA, decentralized learning can be executed with privacy guarantees, making it applicable to large-scale scenarios. Our experiments show that SGBMA produces higher model utility for Independent and Identically Distributed (IID) data and comparable results to federated learning in non-IID settings.
UR - http://www.scopus.com/inward/record.url?scp=85176392506&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85176392506&partnerID=8YFLogxK
U2 - 10.1007/s10586-023-04174-9
DO - 10.1007/s10586-023-04174-9
M3 - Article
AN - SCOPUS:85176392506
SN - 1386-7857
VL - 27
SP - 3911
EP - 3925
JO - Cluster Computing
JF - Cluster Computing
IS - 4
ER -