MemGuard: Defending against black-box membership inference attacks via adversarial examples

Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

282 Scopus citations

Abstract

In a membership inference attack, an attacker aims to infer whether a data sample is in a target classifier's training dataset or not. Specifically, given black-box access to the target classifier, the attacker trains a binary classifier, which takes a data sample's confidence score vector predicted by the target classifier as input and predicts whether the data sample is a member or non-member of the target classifier's training dataset. Membership inference attacks pose severe privacy and security threats to the training dataset. Most existing defenses leverage differential privacy when training the target classifier or regularize the target classifier's training process. These defenses suffer from two key limitations: 1) they do not have formal utility-loss guarantees on the confidence score vectors, and 2) they achieve suboptimal privacy-utility tradeoffs. In this work, we propose MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks. Instead of tampering with the training process of the target classifier, MemGuard adds noise to each confidence score vector predicted by the target classifier. Our key observation is that the attacker uses a classifier to predict member or non-member, and such a classifier is vulnerable to adversarial examples. Based on this observation, we propose to add a carefully crafted noise vector to a confidence score vector to turn it into an adversarial example that misleads the attacker's classifier. Specifically, MemGuard works in two phases. In Phase I, MemGuard finds a carefully crafted noise vector that can turn a confidence score vector into an adversarial example, which is likely to mislead the attacker's classifier into making a random guess at member or non-member. We find such a noise vector via a new method that we design to incorporate the unique utility-loss constraints on the noise vector. In Phase II, MemGuard adds the noise vector to the confidence score vector with a certain probability, which is selected to satisfy a given utility-loss budget on the confidence score vector. Our experimental results on three datasets show that MemGuard can effectively defend against membership inference attacks and achieves better privacy-utility tradeoffs than existing defenses. Our work is the first to show that adversarial examples can be used as a defensive mechanism against membership inference attacks.
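The two-phase mechanism described in the abstract can be illustrated with a short sketch. The Python example below is illustrative only and is not the authors' implementation: the toy attack model (`attack_prob`), the random-search noise crafting in `craft_noise`, and the budget handling in `memguard_like_defense` are simplifying assumptions. The paper instead crafts the noise with an optimization against a trained defense classifier and selects the noise-adding probability to meet a formal utility-loss budget.

```python
# Illustrative sketch of a MemGuard-style two-phase defense (assumptions noted above).
import numpy as np

def attack_prob(conf):
    """Toy stand-in for the attacker's membership classifier:
    returns a pseudo P(member) based on how peaked the confidence vector is."""
    return float(np.max(conf))  # peaked vectors look more "member-like"

def craft_noise(conf, trials=200, rng=None):
    """Phase I (sketch): find a noise vector that (a) keeps the predicted label
    unchanged, (b) keeps the output a valid probability vector, and
    (c) pushes the attacker's output toward 0.5, i.e., a random guess."""
    rng = rng if rng is not None else np.random.default_rng(0)
    label = int(np.argmax(conf))
    best_noise = np.zeros_like(conf)
    best_gap = abs(attack_prob(conf) - 0.5)
    for _ in range(trials):
        cand = conf + rng.normal(scale=0.1, size=conf.shape)
        cand = np.clip(cand, 1e-6, None)
        cand /= cand.sum()                     # stay on the probability simplex
        if int(np.argmax(cand)) != label:      # utility constraint: same predicted label
            continue
        gap = abs(attack_prob(cand) - 0.5)
        if gap < best_gap:
            best_gap, best_noise = gap, cand - conf
    return best_noise

def memguard_like_defense(conf, epsilon=1.0, rng=None):
    """Phase II (sketch): add the crafted noise with a probability chosen so the
    expected confidence-score distortion stays within the budget epsilon."""
    rng = rng if rng is not None else np.random.default_rng(1)
    noise = craft_noise(conf, rng=rng)
    distortion = float(np.abs(noise).sum())
    p = 1.0 if distortion == 0 else min(1.0, epsilon / distortion)
    return conf + noise if rng.random() < p else conf

if __name__ == "__main__":
    conf = np.array([0.90, 0.06, 0.04])        # an overconfident prediction
    print(memguard_like_defense(conf))
```

The two constraints enforced in `craft_noise` (the noisy vector remains a valid probability distribution and the predicted label is unchanged) mirror the utility-loss constraints the abstract refers to, so the defense perturbs only the confidence scores, never the classification itself.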

Original language: English (US)
Title of host publication: CCS 2019 - Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 259-274
Number of pages: 16
ISBN (Electronic): 9781450367479
DOIs
State: Published - Nov 6 2019
Event: 26th ACM SIGSAC Conference on Computer and Communications Security, CCS 2019 - London, United Kingdom
Duration: Nov 11 2019 - Nov 15 2019

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 26th ACM SIGSAC Conference on Computer and Communications Security, CCS 2019
Country/Territory: United Kingdom
City: London
Period: 11/11/19 - 11/15/19

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Networks and Communications
