Robustness and Reliability of Machine Learning-Based Intrusion Detection Systems against Adversarial Poisoning Attacks

Sushil Buriya, Neelam Sharma

Abstract

Intrusion Detection Systems (IDSs) are an essential component of security systems that protect cyberspace against a wide range of cyber threats. In this paper, we evaluate the robustness and reliability of different machine learning (ML) algorithms for IDS implementation against feature poisoning adversarial attacks and highlight their performance under adversarial and non-adversarial conditions. To assess the performance of ML-based IDSs against poisoned training data, the benchmark NSL-KDD dataset has been chosen, and poison samples have been crafted using the Fast Gradient Sign Method (FGSM). The results show a significant decrease in accuracy and an increase in the false positive and false negative rates, which also degrades the recall rate (RR), precision rate (PR), and F1-score across all the ML models when they are trained on the poisoned datasets. The findings demonstrate that the gradient boosting classifier outperforms the Logistic Regression and Multi-Layer Perceptron (MLP) algorithms, with impressive results of 99.5% accuracy and a 99.3% recall rate.
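As a rough illustration of the FGSM-based feature poisoning described above, the sketch below perturbs training features in the direction of the loss gradient, x' = x + ε·sign(∂L/∂x). It assumes a logistic regression surrogate (for which the gradient has a closed form); the ε value, the helper name `fgsm_poison`, and the synthetic data are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of FGSM-style feature poisoning against a logistic
# regression surrogate; epsilon and the data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fgsm_poison(model: LogisticRegression, X: np.ndarray, y: np.ndarray,
                epsilon: float = 0.1) -> np.ndarray:
    """Shift each sample in the direction that increases the binary
    cross-entropy loss: x' = x + epsilon * sign(dL/dx)."""
    w = model.coef_.ravel()             # surrogate weight vector
    p = model.predict_proba(X)[:, 1]    # predicted P(y = 1 | x)
    # For logistic regression, dL/dx = (p - y) * w for each sample.
    grad = (p - y)[:, None] * w[None, :]
    return X + epsilon * np.sign(grad)

# Usage: fit a surrogate on clean data, then craft poisoned features
# that would be injected into the training set.
rng = np.random.default_rng(0)
X_clean = rng.normal(size=(200, 10))
y = (X_clean[:, 0] > 0).astype(int)
surrogate = LogisticRegression().fit(X_clean, y)
X_poisoned = fgsm_poison(surrogate, X_clean, y, epsilon=0.2)
```

In practice, the poisoned samples `X_poisoned` (with their original labels) would replace or augment part of the training data before fitting the victim models, which is what produces the accuracy and recall degradation the abstract reports.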
