VICReg: Variance-Invariance-Covariance Regularization For Self-Supervised Learning
Abstract
Recent self-supervised methods for image representation learning maximize the
agreement between embedding vectors produced by encoders fed with different
views of the same image. The main challenge is to prevent a collapse in which
the encoders produce constant or non-informative vectors. We introduce VICReg
(Variance-Invariance-Covariance Regularization), a method that explicitly avoids
the collapse problem with two regularization terms applied to both embeddings
separately: (1) a term that maintains the variance of each embedding dimension
above a threshold, (2) a term that decorrelates each pair of variables. Unlike
most other approaches to the same problem, VICReg does not require techniques
such as weight sharing between the branches, batch normalization, feature-wise
normalization, output quantization, stop gradient, memory banks, etc., and achieves
results on par with the state of the art on several downstream tasks. In addition, we
show that our variance regularization term stabilizes the training of other methods
and leads to performance improvements.
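
To make the two regularization terms concrete, the following is a minimal PyTorch sketch of an invariance term, a variance term that keeps the standard deviation of each embedding dimension above a threshold, and a covariance term that decorrelates pairs of variables. The function name `vicreg_loss`, the loss coefficients, and the threshold value of 1 are illustrative assumptions and are not specified in this abstract.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, sim_coef=25.0, std_coef=25.0, cov_coef=1.0, eps=1e-4):
    """Sketch of the three terms for two batches of embeddings of shape (N, D)."""
    n, d = z_a.shape

    # Invariance term: mean-squared distance between embeddings of the two views.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance term: hinge loss keeping the std of each dimension above a threshold (1 here).
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    std_loss = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))

    # Covariance term: push off-diagonal entries of each covariance matrix toward zero,
    # decorrelating each pair of embedding variables.
    z_a = z_a - z_a.mean(dim=0)
    z_b = z_b - z_b.mean(dim=0)
    cov_a = (z_a.T @ z_a) / (n - 1)
    cov_b = (z_b.T @ z_b) / (n - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov_loss = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d

    return sim_coef * sim_loss + std_coef * std_loss + cov_coef * cov_loss
```

Since the variance and covariance terms are computed on each branch's embeddings separately, this formulation places no constraint on how the two branches relate architecturally, which is why no weight sharing, stop gradient, or memory bank is needed.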