Conference paper, Year: 2020

Cross-Domain Authorship Attribution Using Pre-trained Language Models

Georgios Barlas
  • Role: Author
  • PersonId: 1242576
Efstathios Stamatatos
  • Role: Author
  • PersonId: 1242577

Abstract

Authorship attribution attempts to identify the authors behind texts and has important applications mainly in cyber-security, digital humanities and social media analytics. An especially challenging but very realistic scenario is cross-domain attribution, where texts of known authorship (training set) differ from texts of disputed authorship (test set) in topic or genre. In this paper, we modify a successful authorship verification approach based on a multi-headed neural network language model and combine it with pre-trained language models. Based on experiments on a controlled corpus covering several text genres, where topic and genre are specifically controlled, we demonstrate that the proposed approach achieves very promising results. We also demonstrate the crucial effect of the normalization corpus in cross-domain attribution.
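
Below is a minimal, hedged sketch of the general idea of attribution with pre-trained language models as described in the abstract: score a disputed text under one causal language model per candidate author and calibrate each score against a shared normalization corpus. It is not the paper's multi-headed architecture; the per-author GPT-2 checkpoints, function names, and variable names are illustrative assumptions only.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast


def cross_entropy(model, tokenizer, text, device="cpu"):
    """Average per-token cross-entropy of `text` under a causal LM."""
    enc = tokenizer(text, return_tensors="pt", truncation=True).to(device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()


def attribute(disputed_text, author_checkpoints, normalization_texts):
    """Attribute `disputed_text` to one of the candidate authors.

    author_checkpoints: dict mapping author name -> path of a (hypothetical)
        GPT-2 checkpoint fine-tuned on that author's known texts.
    normalization_texts: list of texts from a normalization corpus used to
        calibrate the raw losses, so that models that are simply "easier"
        or "harder" overall do not dominate the comparison.
    """
    scores = {}
    for author, path in author_checkpoints.items():
        tokenizer = GPT2TokenizerFast.from_pretrained(path)
        model = GPT2LMHeadModel.from_pretrained(path).eval()
        loss_disputed = cross_entropy(model, tokenizer, disputed_text)
        loss_norm = sum(cross_entropy(model, tokenizer, t)
                        for t in normalization_texts) / len(normalization_texts)
        # Lower calibrated loss = the model finds the disputed text more
        # predictable, i.e. more consistent with that author's writing.
        scores[author] = loss_disputed - loss_norm
    predicted_author = min(scores, key=scores.get)
    return predicted_author, scores
```

In the cross-domain setting studied here, the choice of normalization_texts (whether they match the test texts in topic and genre) is the kind of factor whose effect the abstract describes as crucial.
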
Main file

497040_1_En_22_Chapter.pdf (306.26 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04050592 , version 1 (29-03-2023)

License

Attribution (CC BY)

Identifiers

Cite

Georgios Barlas, Efstathios Stamatatos. Cross-Domain Authorship Attribution Using Pre-trained Language Models. 16th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), Jun 2020, Neos Marmaras, Greece. pp.255-266, ⟨10.1007/978-3-030-49161-1_22⟩. ⟨hal-04050592⟩
32 Views
5 Downloads
