Privacy Consensus in Anonymization Systems via Game Theory
Abstract
Privacy protection is a fundamental concern when personal data are collected, stored, and published. Several anonymization methods have been proposed to address privacy issues in private datasets. Every anonymization method has at least one parameter that adjusts the level of privacy protection against the utility retained in the collected data. Choosing a desirable level of privacy protection is a crucial decision, and so far no systematic mechanism exists to guide how the privacy parameter should be set. In this paper, we model this challenge in a game-theoretic framework to find consensual privacy protection levels and to identify the characteristics of each anonymization method. Our model can potentially be used to compare different anonymization methods and to distinguish the settings that make one anonymization method more appealing than the others. We describe the general approach to solving such games and elaborate the procedure using k-anonymity as a sample anonymization method. Our simulations of the game outcomes in the case of k-anonymity reveal how the equilibrium values of k depend on the number of quasi-identifiers, the maximum number of repetitive records, the anonymization cost, and the public's privacy behaviour.
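The abstract takes k-anonymity as its sample anonymization method, where the privacy parameter k is the size of the smallest group of records that share the same quasi-identifier values. As a reminder of what that parameter measures, the following minimal sketch (not taken from the paper; the toy records and column names are illustrative assumptions) computes the k-anonymity level of a small generalized table given its quasi-identifier columns.

```python
from collections import Counter

def k_anonymity_level(records, quasi_identifiers):
    """Return the k-anonymity level of `records`: the size of the smallest
    group of records sharing identical values on the quasi-identifiers."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(counts.values())

# Toy generalized dataset (hypothetical); 'zip' and 'age' act as quasi-identifiers.
records = [
    {"zip": "130**", "age": "30-39", "disease": "flu"},
    {"zip": "130**", "age": "30-39", "disease": "cold"},
    {"zip": "148**", "age": "20-29", "disease": "flu"},
    {"zip": "148**", "age": "20-29", "disease": "asthma"},
    {"zip": "148**", "age": "20-29", "disease": "cold"},
]

print(k_anonymity_level(records, ["zip", "age"]))  # -> 2, i.e. the table is 2-anonymous
```

Larger values of k hide each individual in a bigger group of indistinguishable records, which is exactly the privacy level the game-theoretic model seeks to set consensually.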