Preprint / Working Paper, Year: 2022

Are Large-scale Datasets Necessary for Self-Supervised Pre-training?

Abstract

Pre-training models on large-scale datasets, like ImageNet, is a standard practice in computer vision. This paradigm is especially effective for tasks with small training sets, for which high-capacity models tend to overfit. In this work, we consider a self-supervised pre-training scenario that only leverages the target task data. We consider datasets such as Stanford Cars, Sketch, or COCO, which are one or more orders of magnitude smaller than ImageNet. Our study shows that denoising autoencoders, such as BEiT or a variant that we introduce in this paper, are more robust to the type and size of the pre-training data than popular self-supervised methods trained by comparing image embeddings. We obtain competitive performance compared to ImageNet pre-training on a variety of classification datasets from different domains. On COCO, when pre-training solely on COCO images, the detection and instance segmentation performance surpasses that of supervised ImageNet pre-training in a comparable setting.
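
The abstract contrasts denoising autoencoders (masked image modeling, as in BEiT) with self-supervised methods trained by comparing image embeddings. As a rough illustration of the former family only, the sketch below shows masked-patch reconstruction pre-training on target-task images in PyTorch. The architecture, masking ratio, pixel-regression loss, and all names are illustrative assumptions; this is neither the variant introduced in the paper nor BEiT's tokenizer-based objective.

# Minimal sketch of masked-image-modeling ("denoising autoencoder") pre-training.
# All hyperparameters and the tiny architecture below are illustrative assumptions.
import torch
import torch.nn as nn

PATCH = 16          # assumed patch size
DIM = 192           # assumed embedding dimension
MASK_RATIO = 0.5    # assumed fraction of patches to mask

class TinyMaskedAutoencoder(nn.Module):
    def __init__(self, img_size=224, patch=PATCH, dim=DIM):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # regress raw pixel values of each patch (BEiT instead predicts discrete visual tokens)
        self.head = nn.Linear(dim, patch * patch * 3)

    def forward(self, imgs):
        b = imgs.size(0)
        tokens = self.embed(imgs).flatten(2).transpose(1, 2)              # (B, N, D)
        targets = nn.functional.unfold(imgs, PATCH, stride=PATCH).transpose(1, 2)  # (B, N, P*P*3)
        # randomly replace a subset of patch tokens with a learned mask token
        mask = torch.rand(b, self.num_patches, device=imgs.device) < MASK_RATIO
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        pred = self.head(self.encoder(tokens + self.pos))
        # reconstruction loss computed only on the masked patches
        return ((pred - targets) ** 2).mean(dim=-1)[mask].mean()

# Usage: one pre-training step on a batch of (synthetic stand-in) target-task images.
model = TinyMaskedAutoencoder()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(torch.randn(8, 3, 224, 224))
loss.backward()
opt.step()
opt.zero_grad()

Because the reconstruction target is derived from the input image itself, this objective needs no labels and no external dataset, which is the setting the abstract studies: pre-training only on the target task's own images.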
Main file: splitmask_haltools.pdf (577.9 KB), files produced by the author(s)

Dates and versions

hal-03572721, version 1 (14-02-2022)

Identifiers

  • HAL Id: hal-03572721, version 1

Cite

Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jégou, et al. Are Large-scale Datasets Necessary for Self-Supervised Pre-training?. 2022. ⟨hal-03572721⟩