PROPOSAL
Privacy-Preserving Transformations
Given a private database that I can access only through specific queries, there is still a lot I can learn about its entries [1]. Differential Privacy (DP) tackles this: it lets me learn the (approximate) result of complex queries on a database while preventing me from learning much about its individual entries. The basic approach of DP often boils down to: “apply a privacy-preserving transformation T either to the result of the query F or to the data itself before computing F”. In this project we are interested in transformations of the latter type, i.e., transformations that we can apply to the data before carrying out any computation.
Recent work in our group [2] has shown how to execute these transformations efficiently, directly in hardware, and we are interested in applying this approach to a more general class of transformations of this type. To do that, we need to study their underlying structure: is there a common pattern in the way the transformations are computed? Can they all be represented as matrix multiplications, etc.?
This project will be co-supervised by Zsolt István and Matteo Campanelli from Aarhus University.
Appendix:
Notice that not all transformations are of the form we are interested in. The basic Laplace mechanism, for example, is defined for a query F as:
T_F(X) := F(X) + LaplacianNoise()
where X is the database.
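For concreteness, here is a minimal Python sketch of this output-perturbation pattern. The function names and the toy database are illustrative, not part of any existing codebase; the noise scale follows the standard sensitivity/epsilon rule.

    import numpy as np

    def laplace_mechanism(f, X, sensitivity, epsilon):
        # Output perturbation: noise is added to the query result F(X),
        # not to the database itself, so it is NOT the pattern this
        # project targets.
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return f(X) + noise

    # Toy example: a noisy count over a small 0/1 database.
    X = [0, 1, 1, 0, 1]
    noisy_count = laplace_mechanism(sum, X, sensitivity=1.0, epsilon=0.5)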
An example of what can be useful for us looks more like Warner’s randomized response, where each entry is perturbed before any query is computed (see the sketch below).
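A minimal sketch of this input-perturbation pattern, assuming the common variant in which each bit is reported truthfully with probability p and replaced by a uniformly random bit otherwise; the names and the parameter value are illustrative choices, not prescribed by the proposal.

    import random

    P_TRUTH = 0.75  # probability of reporting the true bit (illustrative)

    def randomized_response(bit):
        # Input perturbation: each entry is noised independently
        # *before* any query runs -- the pattern we care about.
        if random.random() < P_TRUTH:
            return bit
        return random.randint(0, 1)

    # Transform the data first, then compute on the noised copy.
    X = [0, 1, 1, 0, 1]
    X_noised = [randomized_response(x) for x in X]
    # Debias the count: E[sum] = n*(1 - P_TRUTH)/2 + P_TRUTH*true_ones.
    n = len(X_noised)
    est_true_ones = (sum(X_noised) - n * (1 - P_TRUTH) / 2) / P_TRUTH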
An intermediate example is this privacy-preserving approach to computing the maximum: “add LaplacianNoise() to each of the count queries, and report the index of the query with the largest noised value.”
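This is often called “report noisy max”; a minimal sketch follows, under the assumption that each count query has sensitivity 1 (the scale 2/epsilon is a common conservative choice for this mechanism).

    import numpy as np

    def report_noisy_max(counts, epsilon):
        # Add independent Laplace noise to every count query, but
        # release only the index of the largest noised value.
        noised = [c + np.random.laplace(scale=2.0 / epsilon) for c in counts]
        return int(np.argmax(noised))

    # Toy example: which of four categories is most common, privately.
    counts = [120, 85, 130, 60]
    winner = report_noisy_max(counts, epsilon=1.0)

It sits between the two patterns above because the noise is applied to intermediate query results rather than to the raw data, yet only the argmax is released.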
[1] https://dl.acm.org/doi/pdf/10.1145/1559845.1559850
[2] https://gitlab.software.imdea.org/zistvan-public/eurosys20-poster-privacy/-/blob/master/submission.pdf