TY - JOUR
T1 - Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process
AU - Wang, Shiqiang
AU - Urgaonkar, Rahul
AU - Zafer, Murtaza
AU - He, Ting
AU - Chan, Kevin
AU - Leung, Kin K.
N1 - Funding Information:
Manuscript received July 22, 2018; revised March 4, 2019; accepted May 7, 2019; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor J. Llorca. Date of publication May 31, 2019; date of current version June 14, 2019. This work was supported in part by the U.S. Army Research Laboratory and the U.K. Ministry of Defence under Grant W911NF-06-3-0001 and Grant W911NF-16-3-0001. A preliminary version of this paper was presented at IFIP Networking 2015 [1]. (Corresponding author: Shiqiang Wang.) S. Wang is with the IBM T. J. Watson Research Center, Yorktown Heights, NY 10598 USA (e-mail: [email protected]).
Publisher Copyright:
© 1993-2012 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - In mobile edge computing, local edge servers can host cloud-based services, which reduces network overhead and latency but requires service migrations as users move to new locations. It is challenging to make migration decisions optimally because of the uncertainty in such a dynamic cloud environment. In this paper, we formulate the service migration problem as a Markov decision process (MDP). Our formulation captures general cost models and provides a mathematical framework to design optimal service migration policies. To overcome the complexity associated with computing the optimal policy, we approximate the underlying state space by the distance between the user and service locations. We show that the resulting MDP is exact for uniform 1-D user mobility, while it provides a close approximation for uniform 2-D mobility with a constant additive error. We also propose a new algorithm and a numerical technique for computing the optimal solution, which is significantly faster than traditional methods based on standard value or policy iteration. We illustrate the application of our solution in practical scenarios where many theoretical assumptions are relaxed. Our evaluations based on real-world mobility traces of San Francisco taxis show the superior performance of the proposed solution compared to baseline solutions.
AB - In mobile edge computing, local edge servers can host cloud-based services, which reduces network overhead and latency but requires service migrations as users move to new locations. It is challenging to make migration decisions optimally because of the uncertainty in such a dynamic cloud environment. In this paper, we formulate the service migration problem as a Markov decision process (MDP). Our formulation captures general cost models and provides a mathematical framework to design optimal service migration policies. To overcome the complexity associated with computing the optimal policy, we approximate the underlying state space by the distance between the user and service locations. We show that the resulting MDP is exact for uniform 1-D user mobility, while it provides a close approximation for uniform 2-D mobility with a constant additive error. We also propose a new algorithm and a numerical technique for computing the optimal solution, which is significantly faster than traditional methods based on standard value or policy iteration. We illustrate the application of our solution in practical scenarios where many theoretical assumptions are relaxed. Our evaluations based on real-world mobility traces of San Francisco taxis show the superior performance of the proposed solution compared to baseline solutions.
UR - http://www.scopus.com/inward/record.url?scp=85067560064&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85067560064&partnerID=8YFLogxK
U2 - 10.1109/TNET.2019.2916577
DO - 10.1109/TNET.2019.2916577
M3 - Article
AN - SCOPUS:85067560064
SN - 1063-6692
VL - 27
SP - 1272
EP - 1288
JO - IEEE/ACM Transactions on Networking
JF - IEEE/ACM Transactions on Networking
IS - 3
M1 - 8727722
ER -