Open Access Article

Consensus Adversarial Defense Method Based on Augmented Examples

2022; Institute of Electrical and Electronics Engineers; Volume: 19; Issue: 1; Language: English

DOI

10.1109/tii.2022.3169973

ISSN

1941-0050

Authors

Xintao Ding, Yongqiang Cheng, Yonglong Luo, Qingde Li, Prosanta Gope

Topic(s)

Physical Unclonable Functions (PUFs) and Hardware Security

Abstract

Deep learning has been used in many computer-vision-based industrial Internet of Things applications. However, deep neural networks are vulnerable to adversarial examples, inputs crafted specifically to fool a system while remaining imperceptible to humans. In this article, we propose a consensus defense (Cons-Def) method to defend against adversarial attacks. Cons-Def performs classification and detection based on the consensus of the classifications of augmented examples, which are generated by an intensity exchange applied individually to the red, green, and blue components of the input image. We train a CNN using the augmented examples together with the original examples. A test image is assigned to a specific class only if that class occurs most frequently among the classifications of its augmented images and its occurrence reaches a defined threshold; otherwise, the image is detected as an adversarial example. Comparison experiments are conducted on MNIST, CIFAR-10, and ImageNet. The average defense success rate (DSR) against white-box attacks on the test sets of the three datasets is 80.3%. The average DSR against black-box attacks on CIFAR-10 is 91.4%. The average classification accuracies of Cons-Def on benign examples of the three datasets are 98.0%, 78.3%, and 66.1%, respectively. The experimental results show that Cons-Def achieves high classification performance on benign examples and is robust against white-box and black-box adversarial attacks.
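To make the consensus mechanism concrete, the following is a minimal sketch of the classify-or-detect decision described in the abstract. It is not the authors' implementation: the intensity exchange is illustrated here as channel permutations of the RGB planes, and `classify_fn`, `num_classes`, and `threshold` are hypothetical names standing in for a trained CNN's prediction function and the paper's defined threshold.

```python
import itertools
import numpy as np

def augment_channel_permutations(image: np.ndarray) -> list:
    """Generate augmented copies of an H x W x 3 image by permuting
    (exchanging the intensities of) its red, green, and blue channels.
    This is an illustrative stand-in for the paper's intensity-exchange
    augmentation; the exact transform may differ."""
    return [image[..., list(perm)] for perm in itertools.permutations(range(3))]

def consensus_classify(image: np.ndarray, classify_fn, num_classes: int,
                       threshold: float = 0.8):
    """Consensus defense sketch: classify every augmented copy, accept the
    majority class only if its occurrence ratio reaches the threshold, and
    otherwise flag the input as a suspected adversarial example.

    `classify_fn` is a hypothetical callable mapping an image to a class
    index, e.g. the argmax prediction of the CNN trained on augmented
    and original examples."""
    votes = np.zeros(num_classes, dtype=int)
    for augmented in augment_channel_permutations(image):
        votes[classify_fn(augmented)] += 1
    top = int(votes.argmax())
    if votes[top] / votes.sum() >= threshold:
        return top    # consensus reached: assign the majority class
    return None       # no consensus: detect as adversarial
```

Under this sketch, a benign image tends to receive consistent labels across its augmented copies (consensus reached), while an adversarial perturbation tuned to the original image is less likely to survive the intensity exchange, breaking the consensus and triggering detection.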
