Preprints, Working Papers, ... Year: 2022

High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent

Abstract

In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the (worst-case) utility of DP-ERM degrades as the dimension increases. This is a major obstacle to privately learning large machine learning models. In high dimension, it is common for some of a model's parameters to carry more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate-wise gradient step along the gradient's (approximately) largest entry. We show theoretically that DP-GCD can improve utility by exploiting structural properties of the problem's solution (such as sparsity or quasi-sparsity), making very fast progress in early iterations. We then illustrate this numerically, on both synthetic and real datasets. Finally, we describe promising directions for future work.
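The abstract describes two private operations per iteration: a greedy (noisy) selection of the gradient's largest entry, and a noisy gradient step on that single coordinate. The Python sketch below is not the authors' implementation; it is a minimal illustration that assumes report-noisy-max with Laplace noise for the selection and Laplace noise on the update. The function name `dp_gcd_sketch` and all noise scales and step sizes are placeholders, not values calibrated to any (ε, δ) privacy budget.

```python
import numpy as np

def dp_gcd_sketch(grad_fn, w0, iters, step, sel_noise, upd_noise, rng=None):
    """Illustrative (non-calibrated) sketch of DP greedy coordinate descent.

    Per iteration: (1) privately pick the coordinate whose gradient entry
    is (approximately) largest in magnitude, via report-noisy-max;
    (2) take a noisy gradient step on that coordinate only. The noise
    scales are placeholders; a real implementation must calibrate them
    to the gradient's sensitivity and the target privacy budget.
    """
    if rng is None:
        rng = np.random.default_rng()
    w = w0.copy()
    for _ in range(iters):
        g = grad_fn(w)  # full gradient
        # (1) private greedy selection: report-noisy-max with Laplace noise
        j = int(np.argmax(np.abs(g) + rng.laplace(scale=sel_noise, size=g.shape)))
        # (2) noisy coordinate-wise gradient step
        w[j] -= step * (g[j] + rng.laplace(scale=upd_noise))
    return w

# Toy usage: least squares with a (quasi-)sparse solution, where only a
# few coordinates carry information, as in the high-dimensional setting
# the abstract describes.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:3] = [4.0, -3.0, 2.0]
y = X @ w_true
grad = lambda w: X.T @ (X @ w - y) / len(y)
w_hat = dp_gcd_sketch(grad, np.zeros(50), iters=100, step=0.5,
                      sel_noise=0.05, upd_noise=0.05, rng=rng)
```

On a (quasi-)sparse problem like this toy example, the greedy rule concentrates updates on the few informative coordinates, which is the structural property the paper's utility analysis exploits.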
Main file

greedy-coordinate-descent.pdf
Origin: files produced by the author(s)

Dates and versions

hal-03714465, version 1 (05-07-2022)
hal-03714465, version 2 (21-10-2022)
hal-03714465, version 3 (09-04-2023)

Identifiers

  • HAL Id: hal-03714465, version 1

Cite

Paul Mangold, Aurélien Bellet, Joseph Salmon, Marc Tommasi. High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent. 2022. ⟨hal-03714465v1⟩