TY - GEN
T1 - Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses
AU - Yang, Yuxin
AU - Li, Qiang
AU - Jia, Jinyuan
AU - Hong, Yuan
AU - Wang, Binghui
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/12/9
Y1 - 2024/12/9
N2 - Federated graph learning (FedGL) is an emerging federated learning (FL) framework that extends FL to learn graph data from diverse sources without accessing the data. FL for non-graph data has been shown to be vulnerable to backdoor attacks, which inject a shared backdoor trigger into the training data such that the trained backdoored FL model predicts testing data containing the trigger as the attacker desires. However, backdoor attacks against FedGL are largely unexplored, and no effective defense exists. In this paper, we aim to address this significant deficiency. First, we propose an effective, stealthy, and persistent backdoor attack on FedGL. Our attack uses a subgraph as the trigger and designs an adaptive trigger generator that derives an effective trigger location and shape for each graph. Our attack shows that empirical defenses struggle to detect or remove our generated triggers. To mitigate it, we further develop a certified defense for any backdoored FedGL model against a trigger of any shape at any location. Our defense involves carefully dividing a testing graph into multiple subgraphs and designing a majority vote-based ensemble classifier over these subgraphs. We then derive deterministic certified robustness based on the ensemble classifier and prove its tightness. We extensively evaluate our attack and defense on six graph datasets. Our attack results show that our attack obtains > 90% backdoor accuracy on almost all datasets. Our defense results show that, in certain cases, the certified accuracy for clean testing graphs against an arbitrary trigger of size 20 can be close to the normal accuracy under no attack, while there is a moderate gap in other cases. Source code is available at: https://github.com/Yuxin104/Opt-GDBA. The full report is at: https://arxiv.org/abs/2407.08935.
AB - Federated graph learning (FedGL) is an emerging federated learning (FL) framework that extends FL to learn graph data from diverse sources without accessing the data. FL for non-graph data has been shown to be vulnerable to backdoor attacks, which inject a shared backdoor trigger into the training data such that the trained backdoored FL model predicts testing data containing the trigger as the attacker desires. However, backdoor attacks against FedGL are largely unexplored, and no effective defense exists. In this paper, we aim to address this significant deficiency. First, we propose an effective, stealthy, and persistent backdoor attack on FedGL. Our attack uses a subgraph as the trigger and designs an adaptive trigger generator that derives an effective trigger location and shape for each graph. Our attack shows that empirical defenses struggle to detect or remove our generated triggers. To mitigate it, we further develop a certified defense for any backdoored FedGL model against a trigger of any shape at any location. Our defense involves carefully dividing a testing graph into multiple subgraphs and designing a majority vote-based ensemble classifier over these subgraphs. We then derive deterministic certified robustness based on the ensemble classifier and prove its tightness. We extensively evaluate our attack and defense on six graph datasets. Our attack results show that our attack obtains > 90% backdoor accuracy on almost all datasets. Our defense results show that, in certain cases, the certified accuracy for clean testing graphs against an arbitrary trigger of size 20 can be close to the normal accuracy under no attack, while there is a moderate gap in other cases. Source code is available at: https://github.com/Yuxin104/Opt-GDBA. The full report is at: https://arxiv.org/abs/2407.08935.
UR - http://www.scopus.com/inward/record.url?scp=85213595874&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85213595874&partnerID=8YFLogxK
U2 - 10.1145/3658644.3690187
DO - 10.1145/3658644.3690187
M3 - Conference contribution
AN - SCOPUS:85213595874
T3 - CCS 2024 - Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security
SP - 2829
EP - 2843
BT - CCS 2024 - Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery, Inc
T2 - 31st ACM SIGSAC Conference on Computer and Communications Security, CCS 2024
Y2 - 14 October 2024 through 18 October 2024
ER -