TY - GEN
T1 - Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
AU - Zhong, Haoti
AU - Liao, Cong
AU - Squicciarini, Anna Cinzia
AU - Zhu, Sencun
AU - Miller, David
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/3/16
Y1 - 2020/3/16
N2 - Deep learning models have consistently outperformed traditional machine learning models in various classification tasks, including image classification. As such, they have become increasingly prevalent in many real-world applications, including those where security is of great concern. Such popularity, however, may attract attackers who exploit the vulnerabilities of deployed deep learning models and launch attacks against security-sensitive applications. In this paper, we focus on a specific type of data poisoning attack, which we refer to as a backdoor injection attack. The main goal of the adversary performing such an attack is to generate and inject a backdoor into a deep learning model that can be triggered to recognize certain embedded patterns with a target label of the attacker's choice. Additionally, a backdoor injection attack should occur in a stealthy manner, without undermining the efficacy of the victim model. Specifically, we propose two approaches for generating a backdoor that is hardly perceptible yet effective in poisoning the model. We consider two attack settings, with backdoor injection carried out either before model training or during model updating. We carry out extensive experimental evaluations under various assumptions about the adversary model, and demonstrate that such attacks can be effective and achieve a high attack success rate (above 90%) at a small cost in model accuracy and with a small injection rate, even under the weakest assumption, wherein the adversary has no knowledge of either the original training data or the classifier model.
UR - http://www.scopus.com/inward/record.url?scp=85083398329&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083398329&partnerID=8YFLogxK
U2 - 10.1145/3374664.3375751
DO - 10.1145/3374664.3375751
M3 - Conference contribution
AN - SCOPUS:85083398329
T3 - CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy
SP - 97
EP - 108
BT - CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy
PB - Association for Computing Machinery, Inc
T2 - 10th ACM Conference on Data and Application Security and Privacy, CODASPY 2020
Y2 - 16 March 2020 through 18 March 2020
ER -