Non-interactive, Secure Verifiable Aggregation for Decentralized, Privacy-Preserving Learning
Abstract
We propose a novel primitive called NIVA that allows the distributed aggregation of multiple users’ secret inputs by multiple untrusted servers. The returned aggregation result can be publicly verified in a non-interactive way, i.e. the users are not required to participate in the aggregation beyond providing their secret inputs. NIVA allows the secure computation of the sum of a large number of users’ data and can be employed, for example, in the federated learning setting to aggregate the model updates of a deep neural network. We implement NIVA and evaluate its communication and execution performance, comparing it with the current state-of-the-art, i.e. the protocol of Segal et al. (CCS 2017) and the VerifyNet protocol of Xu et al. (IEEE TIFS 2020), and obtain lower per-user communication cost and execution time.