Conference Paper, Year: 2023

Lazy Adaptation Knowledge Learning Based on Frequent Closed Itemsets

Abstract

This paper focuses on lazy adaptation knowledge learning (LAKL) using frequent closed itemset extraction. This approach differs from eager adaptation knowledge learning (EAKL) by the number of cases used in the learning process and by the moment at which the process is triggered. Where EAKL aims to compute adaptation knowledge once on the whole case base with the idea of solving every future target problem, LAKL computes adaptation knowledge on a subset of cases close to the target problem when a new problem has to be solved. The paper presents experiments on datasets generated from Boolean functions and on a real-world dataset, studying in particular how the size of the case base used impacts performance. The results show that LAKL outperforms EAKL in precision and correct answer rate, and that it is therefore better not to use the whole case base for adaptation knowledge learning.
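To make the eager/lazy contrast concrete, here is a minimal Python sketch. It is not the authors' implementation: the Boolean-attribute case representation, the Jaccard-based neighbour selection, the parameter k, and the pairwise-variation collection standing in for frequent closed itemset extraction are all simplifying assumptions used only to illustrate when and on which cases adaptation knowledge (AK) is learned.

```python
# Illustrative sketch only: contrasts eager vs lazy selection of the cases
# used for adaptation knowledge (AK) learning. The real method mines frequent
# closed itemsets over case variations; learn_ak() below is a crude stand-in.
from itertools import combinations

def jaccard_distance(a, b):
    """Distance between two sets of Boolean attributes."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def variation(src, tgt):
    """Attributes dropped/added between two (problem, solution) cases."""
    (p1, s1), (p2, s2) = src, tgt
    return (frozenset(p1 - p2), frozenset(p2 - p1),
            frozenset(s1 - s2), frozenset(s2 - s1))

def learn_ak(cases):
    """Stand-in for AK learning: collect the variations of all case pairs."""
    return {variation(c1, c2) for c1, c2 in combinations(cases, 2)}

def eager_ak(case_base):
    # EAKL: AK computed once, over the whole case base.
    return learn_ak(case_base)

def lazy_ak(case_base, target_problem, k=5):
    # LAKL: AK computed at problem-solving time, over the k cases
    # closest to the target problem.
    neighbours = sorted(case_base,
                        key=lambda c: jaccard_distance(c[0], target_problem))[:k]
    return learn_ak(neighbours)

if __name__ == "__main__":
    case_base = [
        (frozenset({"a", "b"}), frozenset({"x"})),
        (frozenset({"a", "c"}), frozenset({"x", "y"})),
        (frozenset({"b", "c"}), frozenset({"y"})),
        (frozenset({"a", "b", "c"}), frozenset({"x", "y"})),
    ]
    target = frozenset({"a", "b", "d"})
    print(len(eager_ak(case_base)), "eager AK units")
    print(len(lazy_ak(case_base, target, k=3)), "lazy AK units")
```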
Main file
ICCBR_2023_Lazy.pdf (496.53 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04271804, version 1 (06-11-2023)

License

Attribution (CC BY)

Identifiers

Cite

Emmanuel Nauer, Jean Lieber, Mathieu d'Aquin. Lazy Adaptation Knowledge Learning Based on Frequent Closed Itemsets. International Conference on Case-Based Reasoning (ICCBR 2023), Jul 2023, Aberdeen, United Kingdom. pp.309-324, ⟨10.1007/978-3-031-40177-0_20⟩. ⟨hal-04271804⟩