Unveiling the Secrets without Data: Can Graph Neural Networks Be Exploited through Data-Free Model Extraction Attacks?

Yuanxin Zhuang, Chuan Shi, Mengmei Zhang, Jinghui Chen, Lingjuan Lyu, Pan Zhou, Lichao Sun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Graph neural networks (GNNs) play a crucial role in various graph applications, such as social science, biology, and molecular chemistry. Despite their popularity, GNNs are still vulnerable to intellectual property threats. Previous studies have demonstrated the susceptibility of GNN models to model extraction attacks, where attackers steal the functionality of GNNs by sending queries and obtaining model responses. However, existing model extraction attacks often assume that the attacker has access to specific information about the victim model's training data, including node attributes, connections, and the shadow dataset. This assumption is impractical in real-world scenarios. To address this issue, we propose STEALGNN, the first data-free model extraction attack framework against GNNs. STEALGNN advances prior GNN extraction attacks in three key aspects: 1) It is completely data-free, as it does not require actual node features or graph structures to extract GNN models. 2) It constitutes a full-rank attack that can be applied to node classification and link prediction tasks, posing significant intellectual property threats across a wide range of graph applications. 3) It can handle the most challenging hard-label attack setting, where the attacker possesses no knowledge about the target GNN model and can only obtain predicted labels through querying the victim model. Our experimental results on four benchmark graph datasets demonstrate the effectiveness of STEALGNN in attacking representative GNN models.
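To make the attack setting concrete: in the hard-label, data-free scenario described above, the attacker fabricates its own query graphs, receives only the victim GNN's predicted labels, and trains a surrogate to reproduce them. The sketch below illustrates that general query-and-distill loop under strong simplifying assumptions (random node features, a random symmetric adjacency, a dense two-layer GCN in plain PyTorch). It is not the STEALGNN method itself, which learns a graph generator to craft informative queries; all class and variable names here are hypothetical.

# Minimal sketch of hard-label, query-based GNN extraction on synthetic graphs.
# Assumption: the victim is reachable as a black box that returns only argmax labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_NODES, NUM_FEATS, NUM_CLASSES = 128, 32, 7

class SimpleGCN(nn.Module):
    """Two-layer GCN operating on a dense, row-normalized adjacency matrix."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        h = F.relu(adj @ self.w1(x))
        return adj @ self.w2(h)  # per-node class logits

def random_graph(num_nodes, num_feats, edge_prob=0.05):
    """Sample random node features and a symmetric adjacency with self-loops."""
    x = torch.randn(num_nodes, num_feats)
    a = (torch.rand(num_nodes, num_nodes) < edge_prob).float()
    a = ((a + a.t()) > 0).float() + torch.eye(num_nodes)  # symmetrize + self-loops
    a = a / a.sum(dim=1, keepdim=True)                    # simple row normalization
    return x, a

victim = SimpleGCN(NUM_FEATS, 64, NUM_CLASSES)     # stand-in for the black-box target
surrogate = SimpleGCN(NUM_FEATS, 64, NUM_CLASSES)  # attacker's clone model
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(200):                 # each step issues one batch of queries
    x, adj = random_graph(NUM_NODES, NUM_FEATS)
    with torch.no_grad():               # hard-label oracle: only predicted labels come back
        pseudo_labels = victim(x, adj).argmax(dim=1)
    loss = F.cross_entropy(surrogate(x, adj), pseudo_labels)
    opt.zero_grad(); loss.backward(); opt.step()

The key design point this loop illustrates is that the surrogate never sees real node attributes, real edges, or the victim's logits; everything it learns comes from fabricated queries and the returned hard labels.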

Original language: English (US)
Title of host publication: Proceedings of the 33rd USENIX Security Symposium
Publisher: USENIX Association
Pages: 5251-5268
Number of pages: 18
ISBN (Electronic): 9781939133441
State: Published - 2024
Event: 33rd USENIX Security Symposium, USENIX Security 2024 - Philadelphia, United States
Duration: Aug 14 2024 - Aug 16 2024

Publication series

Name: Proceedings of the 33rd USENIX Security Symposium

Conference

Conference: 33rd USENIX Security Symposium, USENIX Security 2024
Country/Territory: United States
City: Philadelphia
Period: 8/14/24 - 8/16/24

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Safety, Risk, Reliability and Quality
