TY - GEN
T1 - Provably Robust Explainable Graph Neural Networks Against Graph Perturbation Attacks
AU - Li, Jiate
AU - Pang, Meng
AU - Dong, Yun
AU - Jia, Jinyuan
AU - Wang, Binghui
N1 - Publisher Copyright:
© 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Explainable Graph Neural Networks (XGNNs) have garnered increasing attention for enhancing the transparency of Graph Neural Networks (GNNs), which are the leading methods for learning from graph-structured data. While existing XGNNs primarily focus on improving explanation quality, their robustness under adversarial attacks remains largely unexplored. Recent studies have shown that even minor perturbations to graph structure can significantly alter the explanation outcomes of XGNNs, posing serious risks in safety-critical applications such as drug discovery. In this paper, we take the first step toward addressing this challenge by introducing XGNNCert, the first provably robust XGNN. XGNNCert offers formal guarantees that the explanation results will remain consistent, even under worst-case graph perturbation attacks, as long as the number of altered edges is within a bounded limit. Importantly, this robustness is achieved without compromising the original GNN's predictive performance. Evaluation results on multiple graph datasets and GNN explainers show the effectiveness of XGNNCert. Source code is available at https://github.com/JetRichardLee/XGNNCert.
UR - https://www.scopus.com/pages/publications/105010213655
UR - https://www.scopus.com/pages/publications/105010213655#tab=citedBy
M3 - Conference contribution
AN - SCOPUS:105010213655
T3 - 13th International Conference on Learning Representations, ICLR 2025
SP - 45172
EP - 45191
BT - 13th International Conference on Learning Representations, ICLR 2025
PB - International Conference on Learning Representations, ICLR
T2 - 13th International Conference on Learning Representations, ICLR 2025
Y2 - 24 April 2025 through 28 April 2025
ER -