TY - GEN
T1 - Data Poisoning Based Backdoor Attacks to Contrastive Learning
AU - Zhang, Jinghuai
AU - Liu, Hongbin
AU - Jia, Jinyuan
AU - Gong, Neil Zhenqiang
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Contrastive learning (CL) pretrains general-purpose encoders using an unlabeled pretraining dataset, which consists of images or image-text pairs. CL is vulnerable to data poisoning based backdoor attacks (DPBAs), in which an attacker injects poisoned inputs into the pretraining dataset so the encoder is backdoored. However, existing DPBAs achieve limited effectiveness. In this work, we take the first step to analyze the limitations of existing backdoor attacks and propose new DPBAs called CorruptEncoder to CL. CorruptEncoder introduces a new attack strategy to create poisoned inputs and uses a theory-guided method to maximize attack effectiveness. Our experiments show that CorruptEncoder substantially outperforms existing DPBAs. In particular, CorruptEncoder is the first DPBA that achieves more than 90% attack success rates with only a few (3) reference images and a small poisoning ratio (0.5%). Moreover, we also propose a defense, called localized cropping, to defend against DPBAs. Our results show that our defense can reduce the effectiveness of DPBAs, but it sacrifices the utility of the encoder, highlighting the need for new defenses.
AB - Contrastive learning (CL) pretrains general-purpose encoders using an unlabeled pretraining dataset, which consists of images or image-text pairs. CL is vulnerable to data poisoning based backdoor attacks (DPBAs), in which an attacker injects poisoned inputs into the pretraining dataset so the encoder is backdoored. However, existing DPBAs achieve limited effectiveness. In this work, we take the first step to analyze the limitations of existing backdoor attacks and propose new DPBAs called CorruptEncoder to CL. CorruptEncoder introduces a new attack strategy to create poisoned inputs and uses a theory-guided method to maximize attack effectiveness. Our experiments show that CorruptEncoder substantially outperforms existing DPBAs. In particular, CorruptEncoder is the first DPBA that achieves more than 90% attack success rates with only a few (3) reference images and a small poisoning ratio (0.5%). Moreover, we also propose a defense, called localized cropping, to defend against DPBAs. Our results show that our defense can reduce the effectiveness of DPBAs, but it sacrifices the utility of the encoder, highlighting the need for new defenses.
UR - https://www.scopus.com/pages/publications/85211506421
UR - https://www.scopus.com/pages/publications/85211506421#tab=citedBy
U2 - 10.1109/CVPR52733.2024.02299
DO - 10.1109/CVPR52733.2024.02299
M3 - Conference contribution
AN - SCOPUS:85211506421
SN - 9798350353006
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 24357
EP - 24366
BT - Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
PB - IEEE Computer Society
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Y2 - 16 June 2024 through 22 June 2024
ER -