Leveraging Local Variation in Data: Sampling and Weighting Schemes for Supervised Deep Learning
Abstract
In the context of supervised learning of a function by a neural network, we claim and empirically verify that the neural network yields better results when the distribution of the data set focuses on regions where the function to learn is steep. We first translate this assumption into a mathematically workable statement using a Taylor expansion, and derive a new training distribution based on the derivatives of the function to learn. Theoretical derivations then allow us to construct a methodology that we call variance based samples weighting (VBSW). VBSW uses the local variance of the labels to weight the training points. This methodology is general, scalable, cost-effective, and significantly improves the performance of a large class of neural networks on various classification and regression tasks on image, text, and multivariate data. We highlight its benefits with experiments involving neural networks ranging from linear models to ResNet and BERT.
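To make the weighting step concrete, the sketch below shows one way local label variance can be turned into sample weights. It is a minimal illustration only: the k-nearest-neighbour variance estimate, the function name, and the hyperparameters `k`, `w_min`, and `w_max` are assumptions for this example, not necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_variance_weights(X, y, k=10, w_min=1.0, w_max=10.0):
    """Weight each training point by the local variance of nearby labels.

    For every point, the variance of the labels of its k nearest
    neighbours is computed, then linearly rescaled to [w_min, w_max],
    so that points lying in steep regions of the target function
    receive larger weights.
    """
    y = np.atleast_2d(np.asarray(y, dtype=float).T).T  # ensure shape (n, d)
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
    # Label variance over each neighbourhood, summed across label dimensions.
    local_var = y[idx].var(axis=1).sum(axis=-1)
    # Rescale so flat regions still get a small, non-zero weight.
    v_min, v_max = local_var.min(), local_var.max()
    return w_min + (w_max - w_min) * (local_var - v_min) / (v_max - v_min + 1e-12)
```

The resulting weights could then scale the per-sample loss during training, for instance via the `sample_weight` argument of Keras' `model.fit`, or by multiplying per-sample loss terms before reduction.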