Conference paper, 2021

Anomaly Detection for Insider Threats: An Objective Comparison of Machine Learning Models and Ensembles

Abstract

Insider threat detection is challenging due to the wide variety of possible attacks and the limited availability of real threat data for testing. Most previous anomaly detection studies have relied on synthetic threat data, such as the CERT insider threat dataset. However, several of these studies have followed practices that arguably introduce bias, such as the selective use of metrics and the reuse of the same dataset with prior knowledge of the answer labels. In this paper, we build and test a broad set of models while following guidelines of good conduct, producing what we believe to be a more objective comparison of these models. Our results indicate that majority-voting ensembles are a simple and cost-effective way of improving the quality of results from individual machine learning models, both on the CERT data and on a version augmented with additional attacks. We also compare models with their hyperparameters optimized for different target metrics.
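The paper itself does not list code on this page; the sketch below only illustrates the general idea of a majority-voting ensemble over individual anomaly detectors. The choice of detectors, hyperparameters, and features is an assumption for illustration, not the authors' configuration.

```python
# Minimal sketch of a majority-voting ensemble of anomaly detectors.
# Detectors, hyperparameters, and the toy data are illustrative assumptions,
# not the configuration evaluated in the paper.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor


def fit_detectors(X_train):
    """Fit several anomaly detectors on (assumed mostly benign) training data."""
    detectors = [
        IsolationForest(n_estimators=100, random_state=0),
        OneClassSVM(kernel="rbf", nu=0.05),
        LocalOutlierFactor(n_neighbors=20, novelty=True),
    ]
    for det in detectors:
        det.fit(X_train)
    return detectors


def majority_vote(detectors, X_test):
    """Flag a sample as anomalous when more than half of the detectors do."""
    # scikit-learn anomaly detectors return +1 (inlier) or -1 (outlier).
    votes = np.stack([det.predict(X_test) == -1 for det in detectors])
    return votes.sum(axis=0) > len(detectors) / 2


if __name__ == "__main__":
    # Random data standing in for per-user activity feature vectors.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 8))                      # benign activity
    X_test = np.vstack([rng.normal(size=(20, 8)),
                        rng.normal(loc=5.0, size=(5, 8))])   # a few outliers
    dets = fit_detectors(X_train)
    print(majority_vote(dets, X_test).astype(int))
```

In a sketch like this, the ensemble only needs the binary decisions of its members, which is what makes majority voting a cheap way to combine heterogeneous detectors.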

Dates and versions

hal-03746050 , version 1 (04-08-2022)

Cite

Filip Wieslaw Bartoszewski, Mike Just, Michael A. Lones, Oleksii Mandrychenko. Anomaly Detection for Insider Threats: An Objective Comparison of Machine Learning Models and Ensembles. 36th IFIP International Conference on ICT Systems Security and Privacy Protection (SEC), Jun 2021, Oslo, Norway. pp.367-381, ⟨10.1007/978-3-030-78120-0_24⟩. ⟨hal-03746050⟩