TY - GEN
T1 - Server-based manipulation attacks against machine learning models
AU - Liao, Cong
AU - Zhu, Sencun
AU - Zhong, Haoti
AU - Squicciarini, Anna
N1 - Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/3/13
Y1 - 2018/3/13
N2 - Machine learning approaches have been increasingly applied to various data analytics applications (e.g., spam filtering, image classification). Further, with the growing adoption of cloud computing, various cloud services provide an efficient way for users to train, store, or deploy machine learning models in an easy-to-use manner. However, models deployed in the cloud may be exposed to malicious attacks launched at the server side. Attackers with access to the server can stealthily manipulate a machine learning model so as to enable misclassification or introduce bias. In this work, we study the problem of manipulation attacks as they occur at the server side. We consider not only traditional supervised learning models but also state-of-the-art deep learning models. In particular, a simple but effective gradient-descent-based approach is presented to exploit Logistic Regression (LR) and Convolutional Neural Network (CNN) models. We evaluate manipulation attacks against machine learning and deep learning systems using both the Enron email text and MNIST image datasets. Experimental results demonstrate that such attacks can manipulate a model so that malicious samples evade detection easily, without compromising the overall performance of the system.
AB - Machine learning approaches have been increasingly applied to various data analytics applications (e.g., spam filtering, image classification). Further, with the growing adoption of cloud computing, various cloud services provide an efficient way for users to train, store, or deploy machine learning models in an easy-to-use manner. However, models deployed in the cloud may be exposed to malicious attacks launched at the server side. Attackers with access to the server can stealthily manipulate a machine learning model so as to enable misclassification or introduce bias. In this work, we study the problem of manipulation attacks as they occur at the server side. We consider not only traditional supervised learning models but also state-of-the-art deep learning models. In particular, a simple but effective gradient-descent-based approach is presented to exploit Logistic Regression (LR) and Convolutional Neural Network (CNN) models. We evaluate manipulation attacks against machine learning and deep learning systems using both the Enron email text and MNIST image datasets. Experimental results demonstrate that such attacks can manipulate a model so that malicious samples evade detection easily, without compromising the overall performance of the system.
UR - http://www.scopus.com/inward/record.url?scp=85052017691&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052017691&partnerID=8YFLogxK
U2 - 10.1145/3176258.3176321
DO - 10.1145/3176258.3176321
M3 - Conference contribution
AN - SCOPUS:85052017691
T3 - CODASPY 2018 - Proceedings of the 8th ACM Conference on Data and Application Security and Privacy
SP - 2
EP - 34
BT - CODASPY 2018 - Proceedings of the 8th ACM Conference on Data and Application Security and Privacy
PB - Association for Computing Machinery, Inc
T2 - 8th ACM Conference on Data and Application Security and Privacy, CODASPY 2018
Y2 - 19 March 2018 through 21 March 2018
ER -