Backdoor attacks to graph neural networks

Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

76 Scopus citations

Abstract

In this work, we propose the first backdoor attack on graph neural networks (GNNs). Specifically, we propose a subgraph-based backdoor attack on GNNs for graph classification. In our backdoor attack, a GNN classifier predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph. Our empirical results on three real-world graph datasets show that our backdoor attacks are effective and have a small impact on a GNN's prediction accuracy for clean testing graphs. Moreover, we generalize a randomized-smoothing-based certified defense to defend against our backdoor attacks. Our empirical results show that the defense is effective in some cases but ineffective in others, highlighting the need for new defenses against our backdoor attacks.
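To illustrate the core idea of the attack described above, the following is a minimal sketch of subgraph-trigger injection, assuming networkx is available. The trigger model (an Erdős–Rényi random graph) and all parameters are illustrative assumptions, not the paper's exact configuration; the sketch only shows how a fixed trigger subgraph can be embedded into a test graph by rewiring a randomly chosen set of its nodes.

```python
import random
import networkx as nx

def make_trigger(num_nodes=5, edge_prob=0.8, seed=0):
    # Assumption: the trigger is a small Erdos-Renyi subgraph with a fixed seed,
    # so the same trigger can be injected at training and testing time.
    return nx.erdos_renyi_graph(num_nodes, edge_prob, seed=seed)

def inject_trigger(graph, trigger):
    # Embed the trigger into `graph` by choosing trigger-size victim nodes and
    # rewiring their induced subgraph to match the trigger's edge pattern.
    poisoned = graph.copy()
    victims = random.sample(list(poisoned.nodes), trigger.number_of_nodes())
    vset = set(victims)
    mapping = dict(zip(trigger.nodes, victims))
    # Remove existing edges among the chosen nodes ...
    poisoned.remove_edges_from(
        [(u, v) for u, v in poisoned.edges(victims) if u in vset and v in vset]
    )
    # ... and add the trigger's edges in their place.
    poisoned.add_edges_from((mapping[u], mapping[v]) for u, v in trigger.edges)
    return poisoned

# Usage: poison a clean graph; in the full attack, poisoned training graphs
# would also be relabeled with the attacker-chosen target class.
clean = nx.barabasi_albert_graph(30, 2, seed=1)
backdoored = inject_trigger(clean, make_trigger())
```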

Original language: English (US)
Title of host publication: SACMAT 2021 - Proceedings of the 26th ACM Symposium on Access Control Models and Technologies
Publisher: Association for Computing Machinery
Pages: 15-26
Number of pages: 12
ISBN (Electronic): 9781450383653
DOIs
State: Published - Jun 11 2021
Event: 26th ACM Symposium on Access Control Models and Technologies, SACMAT 2021 - Virtual, Online, Spain
Duration: Jun 16 2021 - Jun 18 2021

Publication series

Name: Proceedings of ACM Symposium on Access Control Models and Technologies, SACMAT

Conference

Conference: 26th ACM Symposium on Access Control Models and Technologies, SACMAT 2021
Country/Territory: Spain
City: Virtual, Online
Period: 6/16/21 - 6/18/21

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Networks and Communications
  • Safety, Risk, Reliability and Quality
  • Information Systems
