Statistically Valid Inferences from Privacy Protected Data

Venerable procedures for privacy protection and data sharing within academia, companies, and governments, and between sectors, have proven inadequate (e.g., respondents in de-identified surveys can usually be re-identified). At the same time, unprecedented quantities of data that could help social scientists understand and ameliorate the challenges of human society are presently locked away inside companies, governments, and other organizations, in part because of worries about privacy violations. We address these problems with a general-purpose data access and analysis system that offers mathematical guarantees of privacy for individuals who may be represented in the data, statistical guarantees for researchers seeking insights from it, and protection for society from some fallacious scientific conclusions. We build on the standard of "differential privacy" but, unlike most such approaches, we also correct for the serious statistical biases induced by privacy-preserving procedures, provide a proper accounting for statistical uncertainty, and impose minimal constraints on the choice of data analytic methods and types of quantities estimated. Our algorithm is easy to implement, simple to use, and computationally efficient; we also offer open source software illustrating all our methods.
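To make the core idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual algorithm or software) of the two ingredients the abstract describes: releasing a statistic under differential privacy via the standard Laplace mechanism, and then reporting a standard error that honestly accounts for both the usual sampling uncertainty and the extra variance introduced by the privacy noise. All function and parameter names here are assumptions for illustration.

```python
import math
import random
import statistics

def dp_mean_with_uncertainty(values, epsilon, lo=0.0, hi=1.0, seed=0):
    """Illustrative sketch: epsilon-differentially-private mean of bounded
    values, returned with a standard error that accounts for BOTH sampling
    variability and the Laplace privacy noise (not the paper's algorithm)."""
    rng = random.Random(seed)
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]  # enforce known bounds
    sensitivity = (hi - lo) / n       # one record moves the mean by at most this
    scale = sensitivity / epsilon     # Laplace scale giving epsilon-DP
    # Draw Laplace(0, scale) noise by inverse-CDF sampling.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    noisy_mean = sum(clipped) / n + noise
    # Honest uncertainty: sampling variance of the mean PLUS the variance
    # of the Laplace noise, Var = 2 * scale**2.
    sampling_var = statistics.variance(clipped) / n
    noise_var = 2.0 * scale ** 2
    std_err = math.sqrt(sampling_var + noise_var)
    return noisy_mean, std_err
```

Ignoring the second variance term (as naive analyses of privatized data often do) would understate uncertainty and invite the fallacious conclusions the abstract warns about; the paper's contribution goes further, correcting bias for general estimators, not just means.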

Based on articles (with different subsets of {Georgie Evans, Meg Schwenzfeier, Adam Smith, and Abhradeep Thakurta}) available at GaryKing.org/privacy.