Neural network laundering: Removing black-box backdoor watermarks from deep neural networks

William Aiken, Hyoungshick Kim, Simon Woo, Jungwoo Ryoo

Research output: Contribution to journal › Article › peer-review

26 Scopus citations


Creating a state-of-the-art deep-learning system requires vast amounts of data, expertise, and hardware, yet research into copyright protection for neural networks has been limited. One of the main methods for achieving such protection relies on the susceptibility of neural networks to backdoor attacks to inject a watermark into the network, but the robustness of these tactics has been evaluated primarily against pruning, fine-tuning, and model inversion attacks. In this work, we propose an offensive neural network “laundering” algorithm that removes these backdoor watermarks from neural networks even when the adversary has no prior knowledge of the structure of the watermark. We can effectively remove watermarks used in recent defense and copyright protection mechanisms while retaining test accuracies on the target task above 97% and 80% on MNIST and CIFAR-10, respectively. For all watermarking methods addressed in this paper, we find that the robustness of the watermark is significantly weaker than originally claimed. We also demonstrate the feasibility of our algorithm on more complex tasks as well as in more realistic scenarios where the adversary can carry out efficient laundering attacks using less than 1% of the original training set size, demonstrating that existing watermark-embedding procedures are not sufficient to support their claims.
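To make the setting concrete, the backdoor watermarking scheme the paper attacks works roughly as follows: the owner trains the model to assign secret labels to a secret trigger set, and later claims ownership if a suspect model reproduces those labels at a high rate under black-box (query-only) access. The sketch below illustrates that verification step; the function name, trigger set, and threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of black-box backdoor-watermark verification (assumed
# scheme, not the paper's exact protocol): the owner queries the model
# on a secret trigger set and claims ownership when the fraction of
# matching labels exceeds a threshold.

def verify_watermark(model, trigger_set, threshold=0.9):
    """Return True if the model reproduces enough secret trigger labels
    to support an ownership claim (black-box access only)."""
    matches = sum(1 for x, y in trigger_set if model(x) == y)
    return matches / len(trigger_set) >= threshold

# Toy demonstration: any callable mapping an input to a label works.
watermarked = {"t1": 7, "t2": 3, "t3": 7}.get   # stub model that "remembers" the triggers
laundered = lambda x: 0                         # stub model that has "forgotten" them

triggers = [("t1", 7), ("t2", 3), ("t3", 7)]
print(verify_watermark(watermarked, triggers))  # True
print(verify_watermark(laundered, triggers))    # False
```

A laundering attack in this framing is any procedure that drives the watermarked model's trigger-set match rate below the threshold while preserving its accuracy on the legitimate test set.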

Original language: English (US)
Article number: 102277
Journal: Computers and Security
State: Published - Jul 2021

All Science Journal Classification (ASJC) codes

  • Computer Science(all)
  • Law
