TY - GEN
T1 - Adversary for Social Good
T2 - 18th EAI International Conference on Security and Privacy in Communication Networks, SecureComm 2022
AU - Li, Xiaoting
AU - Chen, Lingwei
AU - Wu, Dinghao
N1 - Publisher Copyright:
© 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
PY - 2023
Y1 - 2023
AB - As social networks become indispensable to people’s daily lives, inference attacks pose a significant threat to users’ privacy: attackers can infiltrate users’ information and infer their private attributes. In particular, social networks are represented as graph-structured data that maintains rich user activities and complex relationships among users. This enables attackers to deploy state-of-the-art graph neural networks (GNNs) to automate attribute inference attacks and disclose users’ private information. To address this challenge, we leverage the vulnerability of GNNs to adversarial attacks and propose a new graph adversarial method, called Attribute-Obfuscating Attack (AttrOBF), that misleads GNNs into misclassification and thus protects user attribute privacy against GNN-based inference attacks on social networks. Unlike prior attacks that perturb the graph structure or node features, AttrOBF provides a more practical formulation by obfuscating optimal training user attribute values; it also advances attribute obfuscation by addressing the unavailability of test attribute annotations, the black-box setting, bi-level optimization, and the non-differentiable obfuscating operation. We demonstrate the effectiveness of AttrOBF for user attribute obfuscation through extensive experiments on three real-world social network datasets. We believe our work demonstrates the great potential of applying adversarial attacks to attribute protection on social networks.
UR - http://www.scopus.com/inward/record.url?scp=85148040286&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85148040286&partnerID=8YFLogxK
DO - 10.1007/978-3-031-25538-0_37
M3 - Conference contribution
AN - SCOPUS:85148040286
SN - 9783031255373
T3 - Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
SP - 710
EP - 728
BT - Security and Privacy in Communication Networks - 18th EAI International Conference, SecureComm 2022, Proceedings
A2 - Li, Fengjun
A2 - Liang, Kaitai
A2 - Lin, Zhiqiang
A2 - Katsikas, Sokratis K.
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 17 October 2022 through 19 October 2022
ER -