TY - JOUR
T1 - Adaptive Federated Learning in Resource Constrained Edge Computing Systems
AU - Wang, Shiqiang
AU - Tuor, Tiffany
AU - Salonidis, Theodoros
AU - Leung, Kin K.
AU - Makaya, Christian
AU - He, Ting
AU - Chan, Kevin
N1 - Publisher Copyright:
© 1983-2012 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs near the optimum with various machine learning models and different data distributions.
AB - Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs near the optimum with various machine learning models and different data distributions.
UR - http://www.scopus.com/inward/record.url?scp=85065907659&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85065907659&partnerID=8YFLogxK
U2 - 10.1109/JSAC.2019.2904348
DO - 10.1109/JSAC.2019.2904348
M3 - Article
AN - SCOPUS:85065907659
SN - 0733-8716
VL - 37
SP - 1205
EP - 1221
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
IS - 6
M1 - 8664630
ER -