Ranking by calibrated AdaBoost
Abstract
This paper describes the ideas and methodologies that we used in the Yahoo learning-to-rank challenge. Our technique is essentially pointwise with a listwise touch at the last combination step. The main ingredients of our approach are 1) preprocessing (querywise normalization), 2) multi-class AdaBoost.MH, 3) regression calibration, and 4) an exponentially weighted forecaster for model combination. In post-challenge analysis we found that preprocessing and training AdaBoost with a wide variety of hyperparameters improved individual models significantly, and the final listwise ensemble step was crucial, whereas calibration helped only in creating diversity.
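As a rough illustration of the first ingredient, the sketch below shows one plausible form of querywise normalization: standardizing each feature within a query group. The paper's abstract does not specify the exact scheme, so the z-scoring choice and the function name are assumptions for illustration only.

```python
import numpy as np

def querywise_normalize(X, query_ids):
    """Standardize each feature within each query group.

    A minimal sketch of querywise normalization; the precise scheme
    used by the authors (e.g. z-scoring vs. rank-based scaling) is
    assumed here, not taken from the paper.
    """
    X = np.asarray(X, dtype=float).copy()
    query_ids = np.asarray(query_ids)
    for qid in np.unique(query_ids):
        mask = query_ids == qid
        mu = X[mask].mean(axis=0)
        sigma = X[mask].std(axis=0)
        sigma[sigma == 0.0] = 1.0  # leave constant features unscaled
        X[mask] = (X[mask] - mu) / sigma
    return X

# Example: two queries with three documents each and two features
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0],
              [5.0, 1.0], [6.0, 1.0], [7.0, 1.0]])
qids = np.array([1, 1, 1, 2, 2, 2])
print(querywise_normalize(X, qids))
```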
Domains
Machine Learning [cs.LG]

Origin: Files produced by the author(s)