TY - GEN
T1 - Unveiling the Secrets without Data: Can Graph Neural Networks Be Exploited through Data-Free Model Extraction Attacks?
T2 - 33rd USENIX Security Symposium, USENIX Security 2024
AU - Zhuang, Yuanxin
AU - Shi, Chuan
AU - Zhang, Mengmei
AU - Chen, Jinghui
AU - Lyu, Lingjuan
AU - Zhou, Pan
AU - Sun, Lichao
N1 - Publisher Copyright:
© USENIX Security Symposium 2024. All rights reserved.
PY - 2024
Y1 - 2024
AB - Graph neural networks (GNNs) play a crucial role in various graph applications, such as social science, biology, and molecular chemistry. Despite their popularity, GNNs are still vulnerable to intellectual property threats. Previous studies have demonstrated the susceptibility of GNN models to model extraction attacks, where attackers steal the functionality of GNNs by sending queries and obtaining model responses. However, existing model extraction attacks often assume that the attacker has access to specific information about the victim model's training data, including node attributes, connections, and the shadow dataset. This assumption is impractical in real-world scenarios. To address this issue, we propose STEALGNN, the first data-free model extraction attack framework against GNNs. STEALGNN advances prior GNN extraction attacks in three key aspects: 1) It is completely data-free, as it does not require actual node features or graph structures to extract GNN models. 2) It constitutes a full-rank attack that can be applied to node classification and link prediction tasks, posing significant intellectual property threats across a wide range of graph applications. 3) It can handle the most challenging hard-label attack setting, where the attacker possesses no knowledge about the target GNN model and can only obtain predicted labels through querying the victim model. Our experimental results on four benchmark graph datasets demonstrate the effectiveness of STEALGNN in attacking representative GNN models.
UR - http://www.scopus.com/inward/record.url?scp=85203819944&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85203819944&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85203819944
T3 - Proceedings of the 33rd USENIX Security Symposium
SP - 5251
EP - 5268
BT - Proceedings of the 33rd USENIX Security Symposium
PB - USENIX Association
Y2 - 14 August 2024 through 16 August 2024
ER -