Journal article, Entropy, 2021

Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks

Abstract

We make three related contributions motivated by the challenge of training stochastic neural networks, particularly in a PAC-Bayesian setting: (1) we show how averaging over an ensemble of stochastic neural networks enables a new class of partially-aggregated estimators; (2) we show that these lead to provably lower-variance gradient estimates for non-differentiable signed-output networks; (3) we reformulate a PAC-Bayesian bound for these networks to derive a directly optimisable, differentiable objective and a generalisation guarantee, without using a surrogate loss or loosening the bound. This bound is twice as tight as that of Letarte et al. (2019) on a similar network type. We show empirically that these innovations make training easier and lead to competitive guarantees.
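To give a flavour of contribution (1), the following is a minimal sketch, not the authors' implementation: a one-hidden-layer network with sign activations and Gaussian weight distributions, where the hidden layer is handled by Monte Carlo sampling while the signed output layer is aggregated in closed form via the identity E[sign(v.h)] = erf(mu.h / (sigma * sqrt(2) * ||h||)) for v ~ N(mu, sigma^2 I). All names, shapes and the isotropic-Gaussian assumption here are illustrative.

# Illustrative sketch of partial aggregation (assumed setup, not the paper's code).
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

def sample_hidden_signs(x, mu_hidden, sigma, n_samples):
    # Monte Carlo step: draw hidden-layer weights W ~ N(mu_hidden, sigma^2 I)
    # and apply the non-differentiable sign activation.
    n_hidden, d_in = mu_hidden.shape
    w = mu_hidden + sigma * rng.standard_normal((n_samples, n_hidden, d_in))
    return np.sign(w @ x)  # shape: (n_samples, n_hidden)

def aggregated_output(h, mu_out, sigma):
    # Closed-form step: for v ~ N(mu_out, sigma^2 I), v.h is Gaussian, so
    # E[sign(v.h)] = erf(mu_out.h / (sigma * sqrt(2) * ||h||)), which is
    # smooth in mu_out even though sign itself is not differentiable.
    norms = np.linalg.norm(h, axis=1)
    z = (h @ mu_out) / (sigma * np.sqrt(2.0) * norms)
    return erf(z).mean()  # Monte Carlo average over hidden samples

# Toy usage with illustrative dimensions.
x = np.array([1.0, -0.5, 2.0])
mu_hidden = rng.standard_normal((4, 3))
mu_out = rng.standard_normal(4)
h = sample_hidden_signs(x, mu_hidden, sigma=0.5, n_samples=100)
print(aggregated_output(h, mu_out, sigma=0.5))  # estimate of E[f(x)] in [-1, 1]

Because the aggregated output is smooth in the output-layer mean, a bounded loss of this quantity can be differentiated directly rather than through a surrogate, which is the mechanism behind the directly optimisable objective described above.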
Main file: 2006.12228.pdf (253.54 KB). Origin: files produced by the author(s).

Dates and versions

hal-02879216, version 1 (23-06-2020)


Cite

Felix Biggs, Benjamin Guedj. Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks. Entropy, 2021, 23 (10), 1280. ⟨10.3390/e23101280⟩. ⟨hal-02879216⟩