On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models

Journal article, Transactions on Machine Learning Research Journal, 2024

Abstract

Large pretrained visual models exhibit remarkable generalization across diverse recognition tasks. Yet, real-world applications often demand compact models tailored to specific problems. Variants of knowledge distillation have been devised for such a purpose, enabling task-specific compact models (the students) to learn from a generic large pretrained one (the teacher). In this paper, we show that the excellent robustness and versatility of recent pretrained models challenge common practices established in the literature, calling for a new set of optimal guidelines for task-specific distillation. To address the lack of samples in downstream tasks, we also show that a variant of Mixup based on stable diffusion complements standard data augmentation. This strategy eliminates the need for engineered text prompts and improves distillation of generic models into streamlined specialized networks.
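As a rough illustration of the teacher-student setup described in the abstract (not the paper's exact recipe), the sketch below shows a standard soft-label distillation step in PyTorch: a frozen large pretrained teacher and a compact task-specific student both process downstream-task images, and the student is trained with a mix of the supervised task loss and a loss matching the teacher's softened outputs. The module names, loss weighting `alpha`, and temperature `tau` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, images, labels, alpha=0.5, tau=2.0):
    """One training step combining cross-entropy with soft-label distillation.

    A minimal sketch of generic knowledge distillation, not the paper's method.
    """
    with torch.no_grad():
        teacher_logits = teacher(images)   # frozen generic pretrained teacher
    student_logits = student(images)       # compact task-specific student

    # Standard supervised loss on the downstream-task labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    # KL divergence between temperature-softened teacher and student outputs.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau ** 2

    return (1 - alpha) * ce_loss + alpha * kd_loss
```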
Main file

2242_On_Good_Practices_for_Tas.pdf (15.9 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04847170, version 1 (18-12-2024)

Identifiers

  • HAL Id: hal-04847170, version 1

Cite

Juliette Marrie, Michael Arbel, Julien Mairal, Diane Larlus. On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models. Transactions on Machine Learning Research Journal, 2024. ⟨hal-04847170⟩