Conference Paper, 2023

Self-supervised learning with rotation-invariant kernels

Abstract

We introduce a regularization loss based on kernel mean embeddings with rotation-invariant kernels on the hypersphere (also known as dot-product kernels) for self-supervised learning of image representations. Besides being fully competitive with the state of the art, our method significantly reduces the time and memory complexity of self-supervised training, making it implementable for very large embedding dimensions on existing devices and more easily adjustable than previous methods to settings with limited resources. Our work follows the major paradigm where the model learns to be invariant to some predefined image transformations (cropping, blurring, color jittering, etc.), while avoiding a degenerate solution by regularizing the embedding distribution. Our particular contribution is to propose a loss family that encourages the embedding distribution to be close to the uniform distribution on the hypersphere, with respect to the maximum mean discrepancy pseudometric. We demonstrate that this family encompasses several regularizers of former methods, including uniformity-based and information-maximization methods, which are variants of our flexible regularization loss with different kernels. Beyond its practical consequences for state-of-the-art self-supervised learning with limited resources, the proposed generic regularization approach opens up perspectives for leveraging the kernel-methods literature more broadly to improve self-supervised learning.
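To make the regularizer described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: for a rotation-invariant kernel k(x, y) = phi(<x, y>) on the unit hypersphere, the terms of the squared maximum mean discrepancy that involve the uniform distribution do not depend on the embeddings, so minimizing the MMD against the uniform distribution reduces to minimizing the mean pairwise kernel value over a batch. The function name, the default choice phi(t) = exp(2t), and the exclusion of self-pairs are illustrative assumptions; the kernel family studied in the paper may differ.

```python
import torch
import torch.nn.functional as F


def uniformity_mmd_loss(z: torch.Tensor, phi=lambda t: torch.exp(2.0 * t)) -> torch.Tensor:
    """Hypothetical MMD-based uniformity regularizer (illustrative sketch).

    For a rotation-invariant kernel k(x, y) = phi(<x, y>), the cross term
    E_u[k(x, u)] and the constant term E_{u,u'}[k(u, u')] of the squared MMD
    against the uniform distribution on the hypersphere do not depend on the
    embeddings, so only the mean pairwise kernel value of the batch matters.
    """
    z = F.normalize(z, dim=1)                      # embeddings live on the unit hypersphere
    gram = phi(z @ z.t())                          # pairwise kernel values phi(<z_i, z_j>)
    n = z.shape[0]
    off_diag = gram.sum() - gram.diagonal().sum()  # drop self-pairs (unbiased-style estimate)
    return off_diag / (n * (n - 1))
```

In a joint-embedding setup, such a term would be added, weighted by a hyperparameter, to an invariance loss between the embeddings of two augmented views of the same image. With phi(t) = exp(2t), the term essentially behaves like a Gaussian-kernel uniformity loss, since ||x - y||^2 = 2 - 2<x, y> on the unit sphere.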
Main file: iclr2023_conference.pdf (440.75 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03738466, version 1 (26-07-2022)
hal-03738466, version 2 (03-10-2022)
hal-03738466, version 3 (11-10-2022)
hal-03738466, version 4 (06-03-2023)

Identifiers

HAL Id: hal-03738466
Cite

Léon Zheng, Gilles Puy, Elisa Riccietti, Patrick Pérez, Rémi Gribonval. Self-supervised learning with rotation-invariant kernels. The Eleventh International Conference on Learning Representations, May 2023, Kigali, Rwanda. ⟨hal-03738466v4⟩
