TY - JOUR
T1 - Training Set Cleansing of Backdoor Poisoning by Self-Supervised Representation Learning
AU - Wang, Hang
AU - Karami, Sahar
AU - Dia, Ousmane
AU - Ritter, Hippolyt
AU - Emamjomeh-Zadeh, Ehsan
AU - Chen, Jiahui
AU - Xiang, Zhen
AU - Miller, David J.
AU - Kesidis, George
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - A backdoor or Trojan attack is an important type of data poisoning attack against deep neural network (DNN) classifiers, wherein the training dataset is poisoned with a small number of samples, each of which contains the backdoor pattern (usually a pattern that is either imperceptible or innocuous) and is mislabeled to the attacker's target class. When trained on a backdoor-poisoned dataset, a DNN behaves normally on most benign test samples but incorrectly predicts the target class when a test sample contains the backdoor pattern (i.e., contains a backdoor trigger). Here we focus on image classification tasks and show that supervised training may build a stronger association between the backdoor pattern and the attacker's target class than between normal features and the true class of origin. By contrast, self-supervised representation learning ignores the sample labels and learns a feature embedding based on the images' semantic content. Using a feature embedding obtained by self-supervised representation learning, we develop a data cleansing method that combines sample filtering and relabeling. Experiments on the CIFAR-10 benchmark dataset show that our method achieves state-of-the-art performance in mitigating backdoor attacks.
AB - A backdoor or Trojan attack is an important type of data poisoning attack against deep neural network (DNN) classifiers, wherein the training dataset is poisoned with a small number of samples, each of which contains the backdoor pattern (usually a pattern that is either imperceptible or innocuous) and is mislabeled to the attacker's target class. When trained on a backdoor-poisoned dataset, a DNN behaves normally on most benign test samples but incorrectly predicts the target class when a test sample contains the backdoor pattern (i.e., contains a backdoor trigger). Here we focus on image classification tasks and show that supervised training may build a stronger association between the backdoor pattern and the attacker's target class than between normal features and the true class of origin. By contrast, self-supervised representation learning ignores the sample labels and learns a feature embedding based on the images' semantic content. Using a feature embedding obtained by self-supervised representation learning, we develop a data cleansing method that combines sample filtering and relabeling. Experiments on the CIFAR-10 benchmark dataset show that our method achieves state-of-the-art performance in mitigating backdoor attacks.
UR - http://www.scopus.com/inward/record.url?scp=85180408772&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85180408772&partnerID=8YFLogxK
U2 - 10.1109/ICASSP49357.2023.10097244
DO - 10.1109/ICASSP49357.2023.10097244
M3 - Conference article
AN - SCOPUS:85180408772
SN - 1520-6149
JO - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
JF - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
T2 - 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Y2 - 4 June 2023 through 10 June 2023
ER -