TY - JOUR
T1 - KNG: The K-Norm Gradient Mechanism
T2 - 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019
AU - Reimherr, Matthew
AU - Awan, Jordan
N1 - Funding Information:
This research was supported in part by NSF DMS 1712826, NSF SES 1853209, and NSF SES 153443 to The Pennsylvania State University. The first author is also grateful for the hospitality of the Simons Institute for the Theory of Computing at UC Berkeley.
Publisher Copyright:
© 2019 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2019
Y1 - 2019
N2 - This paper presents a new mechanism for producing sanitized statistical summaries that achieve differential privacy, called the K-Norm Gradient Mechanism, or KNG. This new approach maintains the strong flexibility of the exponential mechanism, while achieving the powerful utility performance of objective perturbation. KNG starts with an inherent objective function (often an empirical risk), and promotes summaries that are close to minimizing the objective by weighting according to how far the gradient of the objective function is from zero. Working with the gradient instead of the original objective function allows for additional flexibility as one can penalize using different norms. We show that, unlike the exponential mechanism, the noise added by KNG is asymptotically negligible compared to the statistical error for many problems. In addition to theoretical guarantees on privacy and utility, we confirm the utility of KNG empirically in the settings of linear and quantile regression through simulations.
AB - This paper presents a new mechanism for producing sanitized statistical summaries that achieve differential privacy, called the K-Norm Gradient Mechanism, or KNG. This new approach maintains the strong flexibility of the exponential mechanism, while achieving the powerful utility performance of objective perturbation. KNG starts with an inherent objective function (often an empirical risk), and promotes summaries that are close to minimizing the objective by weighting according to how far the gradient of the objective function is from zero. Working with the gradient instead of the original objective function allows for additional flexibility as one can penalize using different norms. We show that, unlike the exponential mechanism, the noise added by KNG is asymptotically negligible compared to the statistical error for many problems. In addition to theoretical guarantees on privacy and utility, we confirm the utility of KNG empirically in the settings of linear and quantile regression through simulations.
UR - http://www.scopus.com/inward/record.url?scp=85088242293&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85088242293&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85088242293
SN - 1049-5258
VL - 32
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 8 December 2019 through 14 December 2019
ER -