SLACK: Stable Learning of Augmentations with Cold-start and KL regularization
Conference paper, 2023

Abstract

Data augmentation is known to improve the generalization capabilities of neural networks, provided that the set of transformations is chosen with care, a selection often performed manually. Automatic data augmentation aims at automating this process. However, most recent approaches still rely on some prior information; they start from a small pool of manually-selected default transformations that are either used to pretrain the network or forced to be part of the policy learned by the automatic data augmentation algorithm. In this paper, we propose to directly learn the augmentation policy without leveraging such prior knowledge. The resulting bilevel optimization problem becomes more challenging due to the larger search space and the inherent instability of bilevel optimization algorithms. To mitigate these issues (i) we follow a successive cold-start strategy with a Kullback-Leibler regularization, and (ii) we parameterize magnitudes as continuous distributions. Our approach leads to competitive results on standard benchmarks despite a more challenging setting, and generalizes beyond natural images.
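To make the two ingredients from the abstract concrete, below is a minimal, hypothetical sketch in PyTorch of a KL-regularized policy update with periodic cold starts over a categorical distribution of transformations. The pool size, learning rate, the random placeholder score, and all variable names are illustrative assumptions; the actual SLACK algorithm estimates policy gradients through bilevel optimization, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

N_TRANSFORMS = 5      # size of the candidate transformation pool (assumed)
KL_WEIGHT = 0.1       # strength of the KL regularization (assumed)
OUTER_ROUNDS = 3
INNER_STEPS = 100

# Policy over transformations: a categorical distribution with learnable logits.
logits = torch.zeros(N_TRANSFORMS, requires_grad=True)
optimizer = torch.optim.SGD([logits], lr=0.05)

for outer_round in range(OUTER_ROUNDS):
    # Cold start: anchor the KL term to the policy at the start of the round.
    # In the paper, the network would also be retrained from scratch here.
    anchor = F.softmax(logits.detach(), dim=0)

    for step in range(INNER_STEPS):
        probs = F.softmax(logits, dim=0)

        # Placeholder per-transformation "score": in a real run, this would be
        # an estimate of how much each transformation improves the validation
        # loss, coming from the inner (network-training) level of the bilevel
        # problem.
        score = torch.randn(N_TRANSFORMS)

        expected_score = (probs * score).sum()
        # KL(pi || anchor) keeps the updated policy close to the cold-start
        # anchor, stabilizing the outer optimization.
        kl = (probs * (probs / anchor).log()).sum()

        loss = -expected_score + KL_WEIGHT * kl
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Magnitudes as continuous distributions (point ii of the abstract): for
# illustration only, a learnable Gaussian sampled with the reparameterization
# trick so that gradients flow to its parameters.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
magnitude = mu + log_sigma.exp() * torch.randn(1)  # differentiable sample
```

The cold-start anchor is recomputed at each outer round, so the KL term penalizes drift within a round rather than from the very first policy; this mirrors the successive cold-start strategy described in the abstract.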

Dates and versions

hal-04135386, version 1 (20-06-2023)

Cite

Juliette Marrie, Michael Arbel, Diane Larlus, Julien Mairal. SLACK: Stable Learning of Augmentations with Cold-start and KL regularization. CVPR 2023 - IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Jun 2023, Vancouver, Canada. pp. 1-17, ⟨10.1109/cvpr52729.2023.02328⟩. ⟨hal-04135386⟩