Differentially private model selection via stability arguments and the robustness of the Lasso

Adam Smith, Abhradeep Thakurta

Research output: Contribution to journal › Conference article › peer-review

89 Scopus citations

Abstract

We design differentially private algorithms for statistical model selection. Given a data set and a large, discrete collection of "models", each of which is a family of probability distributions, the goal is to determine the model that best "fits" the data. This is a basic problem in many areas of statistics and machine learning. We consider settings in which there is a well-defined answer, in the following sense: Suppose that there is a nonprivate model selection procedure f that serves as the reference against which we compare our performance. Our differentially private algorithms output the correct value f(D) whenever f is stable on the input data set D. We work with two notions of stability: perturbation stability and subsampling stability. We give two classes of results: generic ones that apply to any function with a discrete output set, and specific algorithms for the problem of sparse linear regression. The algorithms we describe are efficient and in some cases match the optimal nonprivate asymptotic sample complexity. Our algorithms for sparse linear regression require analyzing the stability properties of the popular LASSO estimator. We give sufficient conditions for the LASSO estimator to be robust to small changes in the data set, and show that these conditions hold with high probability under essentially the same stochastic assumptions that are used in the literature to analyze convergence of the LASSO.
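To make the generic idea concrete, here is a minimal Python sketch in the propose-test-release style the abstract alludes to: release f(D) only if a noisy estimate of f's distance to instability on D clears a threshold, and otherwise output nothing. The function names (`stable_select`, `dist_to_instability`), the caller-supplied distance oracle, and the exact threshold calibration are illustrative assumptions for this sketch, not the paper's precise algorithm or notation.

```python
import math
import numpy as np

def stable_select(D, f, dist_to_instability, epsilon, delta, rng=None):
    """Sketch of generic stability-based private model selection.

    f: nonprivate selector mapping a data set to a discrete model label.
    dist_to_instability: caller-supplied function that lower-bounds how
        many records of D must change before f's output changes. It is
        problem-specific and may be expensive to compute; this sketch
        simply assumes it is available.
    epsilon, delta: privacy parameters of the resulting mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Distance to instability changes by at most 1 when one record
    # changes, so Laplace noise of scale 1/epsilon suffices.
    noisy_dist = dist_to_instability(D) + rng.laplace(scale=1.0 / epsilon)
    # Threshold chosen so the unstable case is released with
    # probability at most delta (simplified calibration).
    if noisy_dist > math.log(1.0 / delta) / epsilon:
        return f(D)   # f is stable on D: safe to release its output
    return None       # the "no output" symbol, written as a bottom symbol in such papers
```

When the distance to instability is hard to compute, the paper instead works with subsampling stability: run f on many small random subsamples and privately test whether one model wins by a wide margin. For the sparse-regression results, the stability analysis is carried out for the LASSO itself, showing that its selected support is unchanged by small data perturbations under standard stochastic assumptions, so the stable branch above fires with high probability.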

Original language: English (US)
Pages (from-to): 819-850
Number of pages: 32
Journal: Journal of Machine Learning Research
Volume: 30
State: Published - 2013
Event: 26th Conference on Learning Theory, COLT 2013 - Princeton, NJ, United States
Duration: Jun 12, 2013 - Jun 14, 2013

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence
