TY - GEN
T1 - Unlearning Backdoor Attacks in Federated Learning
AU - Wu, Chen
AU - Zhu, Sencun
AU - Mitra, Prasenjit
AU - Wang, Wei
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Federated learning systems are constantly under the looming threat of backdoor attacks. Despite significant progress in mitigating such attacks, the challenge of effectively removing a potential attacker's influence from the trained global model remains unresolved. In this paper, we present a novel federated unlearning method suitable for backdoor removal. By leveraging historical update subtraction and knowledge distillation, our approach can maintain the model's performance while completely removing the backdoors implanted in the model by the attacker. It can be seamlessly applied to various types of neural networks and does not require clients' participation in the unlearning process. Through experiments on diverse computer vision and natural language processing datasets, we demonstrate the effectiveness and efficiency of our proposed method. The promising results validate the potential of our approach to bolster the security of federated learning systems against backdoor threats.
UR - http://www.scopus.com/inward/record.url?scp=85210557538&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85210557538&partnerID=8YFLogxK
U2 - 10.1109/CNS62487.2024.10735680
DO - 10.1109/CNS62487.2024.10735680
M3 - Conference contribution
AN - SCOPUS:85210557538
T3 - 2024 IEEE Conference on Communications and Network Security, CNS 2024
BT - 2024 IEEE Conference on Communications and Network Security, CNS 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Conference on Communications and Network Security, CNS 2024
Y2 - 30 September 2024 through 3 October 2024
ER -