Peer-reviewed book chapter

Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts

2019; Springer Science+Business Media; Language: English

10.1007/978-3-030-30244-3_22

ISSN

1611-3349

Authors

Nuno Martins, José Magalhães Cruz, Tiago Cruz, Pedro Henriques Abreu

Topic(s)

Advanced Malware Detection Techniques

Abstract

Adversarial machine learning is an area of study that examines both the generation and the detection of adversarial examples: inputs specially crafted to deceive classifiers. It has been researched extensively in image recognition, where humanly imperceptible modifications to images cause a classifier to make incorrect predictions. The main objective of this paper is to study the behavior of multiple state-of-the-art machine learning algorithms in an adversarial context. To perform this study, six classification algorithms were applied to two datasets, NSL-KDD and CICIDS2017, and four adversarial attack techniques were implemented with multiple perturbation magnitudes. Furthermore, the effectiveness of training the models with adversarial examples to improve recognition was also tested. The results show that adversarial attacks degrade the performance of all the classifiers by between 13% and 40%, with the Denoising Autoencoder being the technique with the highest resilience to attacks.
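To make the abstract's setup concrete, the sketch below shows a generic gradient-sign (FGSM-style) perturbation of tabular features against a logistic-regression surrogate, followed by a comparison of clean and adversarial accuracy. This is only an illustration of the kind of attack and evaluation the paper describes: the synthetic dataset, the choice of classifier, the helper name fgsm_logistic, and the epsilon values are assumptions, not the authors' exact experimental setup.

# Hedged sketch: FGSM-style perturbation of tabular features against a
# logistic-regression surrogate. Dataset, classifier, and epsilon values
# are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a flow-based intrusion-detection dataset (assumption).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def fgsm_logistic(clf, X, y, eps):
    """Gradient-sign perturbation using the closed-form gradient of the
    binary cross-entropy loss for logistic regression: dL/dx = (p - y) * w."""
    p = clf.predict_proba(X)[:, 1]
    grad = (p - y)[:, None] * clf.coef_[0][None, :]
    return X + eps * np.sign(grad)

# Compare clean accuracy against accuracy under increasing perturbation budgets.
for eps in (0.1, 0.3, 0.5):  # illustrative perturbation magnitudes
    X_adv = fgsm_logistic(clf, X_test, y_test, eps)
    print(eps, clf.score(X_test, y_test), clf.score(X_adv, y_test))

The same evaluation loop can be repeated after retraining the classifier on a mix of clean and perturbed samples, which mirrors the adversarial-training comparison mentioned in the abstract.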
