On Evaluating Interestingness Measures for Closed Itemsets
Abstract
Many measures exist for selecting interesting itemsets, but which one is best? In this paper we introduce a methodology for evaluating interestingness measures. The methodology relies on supervised classification and allows us to avoid both experts and artificial datasets in the evaluation process. We apply it to evaluate promising measures for itemset selection, such as leverage and stability, and show that although there is no clear winner between them, stability performs slightly better.
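For reference, a minimal sketch of one of the measures mentioned above, assuming the common definition of itemset leverage as observed support minus the largest support expected under independence of any two-part split of the itemset; the function names, the toy transaction data, and this particular formulation are illustrative assumptions, not the paper's own implementation.

```python
from itertools import combinations

def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    itemset = frozenset(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def leverage(itemset, transactions):
    """Leverage (assumed definition): support of the itemset minus the
    maximum product of supports over all two-part splits of the itemset."""
    items = list(itemset)
    observed = support(items, transactions)
    expected = max(
        support(left, transactions) * support(set(items) - set(left), transactions)
        for r in range(1, len(items))
        for left in combinations(items, r)
    )
    return observed - expected

# Toy usage: a small hypothetical transaction database
transactions = [frozenset(t) for t in (
    {"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"},
)]
print(leverage({"a", "b"}, transactions))  # 0.6 - 0.8 * 0.8 = -0.04
```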