Conference Paper, Year: 2020

On the Learnability of Concepts

Adam Sutton
  • Function: Author

Abstract

Word embeddings are used widely in multiple Natural Language Processing (NLP) applications. They are coordinates associated with each word in a dictionary, inferred from statistical properties of these words in a large corpus. In this paper we introduce the notion of "concept" as a list of words that have shared semantic content. We use this notion to analyse the learnability of certain concepts, defined as the capability of a classifier to recognise unseen members of a concept after training on a random subset of it. We first use this method to measure the learnability of concepts on pretrained word embeddings. We then develop a statistical analysis of concept learnability, based on hypothesis testing and ROC curves, in order to compare the relative merits of various embedding algorithms using a fixed corpus and fixed hyperparameters. We find that all embedding methods capture the semantic content of those word lists, but fastText performs better than the others.
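The abstract describes learnability operationally: train a classifier on a random subset of a concept's word list and test whether it recognises the held-out members. The sketch below is a minimal illustration of that idea, not the paper's code: it uses a random placeholder embedding matrix (in practice one would load pretrained fastText, GloVe, or word2vec vectors), and the concept_learnability helper and its parameters are hypothetical names chosen for this example.

# Minimal sketch (not the paper's code): estimate the "learnability" of a
# concept as the ROC AUC of a classifier asked to recognise held-out
# concept members among random non-concept words.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder embeddings: in practice these would be pretrained vectors
# (e.g. fastText / GloVe / word2vec), one 300-d row per vocabulary word.
vocab_size, dim = 5000, 300
embeddings = rng.normal(size=(vocab_size, dim))

# A "concept" is a list of words (here, word indices) with shared semantic content.
concept = rng.choice(vocab_size, size=100, replace=False)

def concept_learnability(embeddings, concept, train_frac=0.5, n_negatives=500):
    """Train on a random subset of the concept vs. random other words,
    then score how well unseen concept members are ranked above negatives."""
    concept = rng.permutation(concept)
    n_train = int(train_frac * len(concept))
    train_pos, test_pos = concept[:n_train], concept[n_train:]

    # Negative examples: words outside the concept.
    others = np.setdiff1d(np.arange(len(embeddings)), concept)
    negatives = rng.choice(others, size=n_negatives, replace=False)
    train_neg, test_neg = negatives[: n_negatives // 2], negatives[n_negatives // 2 :]

    X_train = np.vstack([embeddings[train_pos], embeddings[train_neg]])
    y_train = np.concatenate([np.ones(len(train_pos)), np.zeros(len(train_neg))])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    X_test = np.vstack([embeddings[test_pos], embeddings[test_neg]])
    y_test = np.concatenate([np.ones(len(test_pos)), np.zeros(len(test_neg))])
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

print(f"ROC AUC for the concept: {concept_learnability(embeddings, concept):.3f}")

With random placeholder vectors the AUC stays near 0.5 (chance level); with real pretrained embeddings and a semantically coherent word list it should rise well above chance, which is the effect the paper quantifies with hypothesis testing and ROC curves.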
Main file
500087_1_En_35_Chapter.pdf (364.86 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04060668, version 1 (06-04-2023)

License

Attribution

Identifiers

Cite

Adam Sutton, Nello Cristianini. On the Learnability of Concepts. 16th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), Jun 2020, Neos Marmaras, Greece. pp.420-432, ⟨10.1007/978-3-030-49186-4_35⟩. ⟨hal-04060668⟩