TY - JOUR
T1 - Neural network laundering
T2 - Removing black-box backdoor watermarks from deep neural networks
AU - Aiken, William
AU - Kim, Hyoungshick
AU - Woo, Simon
AU - Ryoo, Jungwoo
N1 - Funding Information:
This work was supported by the ICT R&D Programs (no. 2017-0-00545) and the ITRC Support Program (IITP-2019-2015-0-00403). The authors would like to thank all the anonymous reviewers for their valuable feedback.
Publisher Copyright:
© 2021 Elsevier Ltd
PY - 2021/7
Y1 - 2021/7
AB - Creating a state-of-the-art deep-learning system requires vast amounts of data, expertise, and hardware, yet research into copyright protection for neural networks has been limited. One of the main methods for achieving such protection relies on the susceptibility of neural networks to backdoor attacks to inject a watermark into the network, but the robustness of these techniques has been evaluated primarily against pruning, fine-tuning, and model inversion attacks. In this work, we propose an offensive neural network “laundering” algorithm that removes these backdoor watermarks from neural networks even when the adversary has no prior knowledge of the structure of the watermark. We can effectively remove watermarks used in recent defense and copyright protection mechanisms while retaining test accuracies on the target task above 97% and 80% on MNIST and CIFAR-10, respectively. For all watermarking methods addressed in this paper, we find that the robustness of the watermark is significantly weaker than originally claimed. We also demonstrate the feasibility of our algorithm on more complex tasks and in more realistic scenarios where the adversary can carry out efficient laundering attacks using less than 1% of the original training set, showing that existing watermark-embedding procedures are not sufficient to support their robustness claims.
UR - http://www.scopus.com/inward/record.url?scp=85107641984&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85107641984&partnerID=8YFLogxK
DO - 10.1016/j.cose.2021.102277
M3 - Article
AN - SCOPUS:85107641984
SN - 0167-4048
VL - 106
JO - Computers & Security
JF - Computers & Security
M1 - 102277
ER -