TY - GEN
T1 - Enhancing Robustness of Graph Convolutional Networks via Dropping Graph Connections
AU - Chen, Lingwei
AU - Li, Xiaoting
AU - Wu, Dinghao
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Graph convolutional networks (GCNs) have emerged as one of the most popular neural networks for a variety of tasks over graphs. Despite their remarkable learning and inference ability, GCNs remain vulnerable to adversarial attacks that imperceptibly perturb graph structures and node features to degrade GCN performance, posing serious threats to real-world applications. Inspired by observations from recent studies suggesting that edge manipulations play a key role in graph adversarial attacks, in this paper we take these attack behaviors into consideration and design a biased graph-sampling scheme that drops graph connections so that random, sparse, and deformed subgraphs are constructed for training and inference. This method imposes significant regularization on graph learning, alleviates sensitivity to edge manipulations, and thus enhances the robustness of GCNs. We evaluate the performance of the proposed method, and the experimental results validate its effectiveness against adversarial attacks.
AB - Graph convolutional networks (GCNs) have emerged as one of the most popular neural networks for a variety of tasks over graphs. Despite their remarkable learning and inference ability, GCNs remain vulnerable to adversarial attacks that imperceptibly perturb graph structures and node features to degrade GCN performance, posing serious threats to real-world applications. Inspired by observations from recent studies suggesting that edge manipulations play a key role in graph adversarial attacks, in this paper we take these attack behaviors into consideration and design a biased graph-sampling scheme that drops graph connections so that random, sparse, and deformed subgraphs are constructed for training and inference. This method imposes significant regularization on graph learning, alleviates sensitivity to edge manipulations, and thus enhances the robustness of GCNs. We evaluate the performance of the proposed method, and the experimental results validate its effectiveness against adversarial attacks.
UR - http://www.scopus.com/inward/record.url?scp=85103269526&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85103269526&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-67664-3_25
DO - 10.1007/978-3-030-67664-3_25
M3 - Conference contribution
AN - SCOPUS:85103269526
SN - 9783030676636
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 412
EP - 428
BT - Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2020, Proceedings
A2 - Hutter, Frank
A2 - Kersting, Kristian
A2 - Lijffijt, Jefrey
A2 - Valera, Isabel
PB - Springer Science and Business Media Deutschland GmbH
T2 - European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2020
Y2 - 14 September 2020 through 18 September 2020
ER -