Can sparsity improve the privacy of neural networks?
Abstract
Sparse neural networks are mainly motivated by resource efficiency: they use fewer parameters than their dense counterparts while reaching comparable accuracy. This article empirically investigates whether sparsity could also improve the privacy of the data used to train the networks. The experiments show positive correlations between the sparsity of the model, its privacy, and its classification error. Simply comparing the privacy of two models with different sparsity levels can therefore yield misleading conclusions about the role of sparsity, because of the additional correlation with the classification error. From this perspective, some caveats are raised about previous works that investigate sparsity and privacy.
Domains
Artificial Intelligence [cs.AI]
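The abstract's central caveat (that privacy comparisons across sparsity levels are confounded by classification error) can be illustrated with a toy experiment. The sketch below is not the paper's pipeline: it trains a small scikit-learn MLP, applies global magnitude pruning at a few hypothetical sparsity levels, and probes privacy with a simple loss-threshold membership-inference attack while also reporting test error. The dataset, architecture, pruning rule, and attack are all illustrative assumptions.

```python
# Hedged sketch, not the paper's method: report membership-inference accuracy
# and test error side by side for several sparsity levels, so that neither
# quantity is read in isolation.
from copy import deepcopy

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.5, random_state=0, stratify=y)


def prune_weights(model, sparsity):
    """Global magnitude pruning: zero out the smallest-magnitude weights."""
    all_w = np.concatenate([w.ravel() for w in model.coefs_])
    threshold = np.quantile(np.abs(all_w), sparsity)
    for w in model.coefs_:
        w[np.abs(w) < threshold] = 0.0


def per_sample_loss(model, X, y):
    """Cross-entropy loss of each sample under the model."""
    probs = model.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)


def mia_accuracy(model, X_in, y_in, X_out, y_out):
    """Loss-threshold attack: samples with below-median loss are guessed to be
    training members; evaluated on a balanced member/non-member split."""
    n = min(len(y_in), len(y_out))
    loss_in = per_sample_loss(model, X_in[:n], y_in[:n])
    loss_out = per_sample_loss(model, X_out[:n], y_out[:n])
    thr = np.median(np.concatenate([loss_in, loss_out]))
    guesses = np.concatenate([loss_in < thr, loss_out >= thr])
    return guesses.mean()  # 0.5 = no leakage, 1.0 = full leakage


base = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500,
                     random_state=0).fit(X_train, y_train)

for sparsity in [0.0, 0.5, 0.9, 0.99]:
    model = deepcopy(base)
    prune_weights(model, sparsity)
    err = 1.0 - model.score(X_test, y_test)
    mia = mia_accuracy(model, X_train, y_train, X_test, y_test)
    print(f"sparsity={sparsity:.2f}  test error={err:.3f}  MIA accuracy={mia:.3f}")
```

Reading the two printed columns together is the point: if a sparser model shows lower attack accuracy but also higher test error, the apparent privacy gain cannot be attributed to sparsity alone.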