TY - CONF
T1 - BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
T2 - 43rd IEEE Symposium on Security and Privacy, SP 2022
AU - Jia, Jinyuan
AU - Liu, Yupei
AU - Gong, Neil Zhenqiang
N1 - Funding Information:
We thank the anonymous reviewers and our shepherd Fabio Pierazzi for constructive comments. This work was supported by NSF under Grant No. 1937786 and the Army Research Office under Grant No. W911NF2110182.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Self-supervised learning in computer vision aims to pre-train an image encoder using a large number of unlabeled images or (image, text) pairs. The pre-trained image encoder can then be used as a feature extractor to build classifiers for many downstream tasks with little or no labeled training data. In this work, we propose BadEncoder, the first backdoor attack on self-supervised learning. In particular, BadEncoder injects backdoors into a pre-trained image encoder such that the downstream classifiers built on top of the backdoored image encoder for different downstream tasks simultaneously inherit the backdoor behavior. We formulate BadEncoder as an optimization problem and propose a gradient-descent-based method to solve it, which produces a backdoored image encoder from a clean one. Our extensive empirical evaluation on multiple datasets shows that BadEncoder achieves high attack success rates while preserving the accuracy of the downstream classifiers. We also demonstrate the effectiveness of BadEncoder on two publicly available, real-world image encoders, i.e., Google's image encoder pre-trained on ImageNet and OpenAI's Contrastive Language-Image Pre-training (CLIP) image encoder pre-trained on 400 million (image, text) pairs collected from the Internet. Moreover, we consider defenses including Neural Cleanse and MNTD (empirical defenses) as well as PatchGuard (a provable defense). Our results show that these defenses are insufficient to defend against BadEncoder, highlighting the need for new defenses. Our code is publicly available at: https://github.com/jjy1994/BadEncoder.
UR - http://www.scopus.com/inward/record.url?scp=85135931269&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85135931269&partnerID=8YFLogxK
U2 - 10.1109/SP46214.2022.9833644
DO - 10.1109/SP46214.2022.9833644
M3 - Conference contribution
AN - SCOPUS:85135931269
T3 - Proceedings - IEEE Symposium on Security and Privacy
SP - 2043
EP - 2059
BT - Proceedings - 43rd IEEE Symposium on Security and Privacy, SP 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 May 2022 through 26 May 2022
ER -
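
Note: the abstract above frames the attack as an optimization problem solved by gradient descent on a clean pre-trained encoder. The following is a minimal, hypothetical PyTorch-style sketch of that general idea, added for illustration only; the function name backdoor_finetune, the single loss weight lam, the patch-stamping scheme, and the cosine-similarity losses are assumptions of this sketch, not the paper's actual objective (see the linked repository for the authors' implementation).

# Hedged sketch: fine-tune a clean encoder so trigger-stamped inputs map near an
# attacker-chosen reference feature while clean inputs keep their original
# features. Illustrative only; not the paper's exact formulation.
import copy
import torch
import torch.nn.functional as F

def backdoor_finetune(clean_encoder, shadow_loader, reference_batch,
                      trigger, mask, epochs=10, lr=1e-3, lam=1.0, device="cpu"):
    clean_encoder = clean_encoder.to(device).eval()
    backdoored = copy.deepcopy(clean_encoder).train()
    opt = torch.optim.Adam(backdoored.parameters(), lr=lr)
    trigger, mask = trigger.to(device), mask.to(device)

    with torch.no_grad():
        # Feature of the attacker-chosen reference input(s), shape (1, D).
        ref_feat = F.normalize(clean_encoder(reference_batch.to(device)), dim=-1)

    for _ in range(epochs):
        for x, _ in shadow_loader:                     # unlabeled shadow images
            x = x.to(device)
            x_trig = x * (1 - mask) + trigger * mask   # stamp the patch trigger

            with torch.no_grad():
                f_ref_clean = F.normalize(clean_encoder(x), dim=-1)
            f_bd_clean = F.normalize(backdoored(x), dim=-1)
            f_bd_trig = F.normalize(backdoored(x_trig), dim=-1)

            # Effectiveness: triggered inputs align with the reference feature.
            loss_eff = -(f_bd_trig * ref_feat).sum(dim=-1).mean()
            # Utility: clean inputs keep the clean encoder's features.
            loss_util = -(f_bd_clean * f_ref_clean).sum(dim=-1).mean()

            loss = loss_eff + lam * loss_util
            opt.zero_grad()
            loss.backward()
            opt.step()

    return backdoored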