Journal article. Journal of Machine Learning Research, 2023.

Risk bounds for PU learning under Selected At Random assumption

Abstract

Positive-unlabeled learning (PU learning) is a special case of semi-supervised binary classification in which only a fraction of the positive examples are labeled. The challenge is then to learn a correct classifier despite this lack of information. Recently, new methodologies have been introduced to address the case where the probability of being labeled may depend on the covariates. In this paper, we establish risk bounds for PU learning under this general assumption. In addition, we quantify the impact of label noise on PU learning compared to the standard classification setting. Finally, we provide a lower bound on the minimax risk, showing that the upper bound is nearly optimal.
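For context, the setting can be sketched in standard PU-learning notation (which may differ from the paper's own): each example has covariates X, a latent true label Y in {0, 1}, and a labeling indicator S, with only positive examples eligible to be labeled, so P(S = 1 | Y = 0, X = x) = 0. Under the Selected At Random (SAR) assumption, the propensity score

    e(x) = P(S = 1 | Y = 1, X = x)

may depend on the covariates x, whereas the more restrictive Selected Completely At Random (SCAR) assumption takes e(x) to be a constant.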
Main file: PULearning_theory.pdf (370.61 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03526889, version 1 (14-01-2022)

Cite

Olivier Coudray, Christine Keribin, Pascal Massart, Patrick Pamphile. Risk bounds for PU learning under Selected At Random assumption. Journal of Machine Learning Research, 2023, 24 (107), pp.1-31. ⟨hal-03526889⟩