Article Open access Peer-reviewed

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training

2022; IEEE Computer Society; Language: English

10.1109/tdsc.2022.3166671

ISSN

2160-9209

Authors

Jakub Breier, Xiaolu Hou, Martín Ochoa, Jesús Solano

Topic(s)

Anomaly Detection Techniques and Applications

Abstract

Neural network implementations are known to be vulnerable to physical attack vectors such as fault injection attacks. Until now, these attacks have only been utilized during the inference phase. In this work, we explore a novel attack paradigm by injecting faults during the training phase in such a way that the resulting network can be attacked during deployment without the need for further faulting. We discuss attacks against ReLU activation functions that make it possible to generate a family of malicious inputs, called fooling inputs, to be used at inference time to induce controlled misclassifications. Such malicious inputs are obtained by mathematically solving a system of linear equations that causes a particular behaviour on the attacked activation functions, similar to the one induced during training through faulting. We call such attacks fooling backdoors, as the faults injected at training time plant backdoors into the network that allow an attacker to produce fooling inputs. We evaluate our approach against multi-layer perceptron networks and convolutional networks on a popular image classification task, obtaining high attack success rates (60%-100%) and high classification confidence when as few as 25 neurons are attacked, while preserving high accuracy on the original classification task.
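The abstract describes recovering fooling inputs by solving a linear system over the pre-activations of the attacked ReLU neurons. The following is a minimal illustrative sketch, not the authors' code: it assumes a single dense layer with randomly generated stand-in weights, an arbitrary set of 25 targeted neurons, and arbitrary target pre-activation values, and uses a least-squares solve to find an input that reproduces the attacker-chosen behaviour at those neurons.

```python
# Hypothetical sketch: construct a "fooling input" for a dense layer
# y = ReLU(W x + b) by solving a linear system so that a chosen set of
# neurons receives attacker-selected pre-activation values, mimicking
# the behaviour induced by faults during training.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 784, 128          # e.g. a flattened 28x28 image, 128 hidden units
W = rng.normal(scale=0.1, size=(n_hidden, n_in))   # stand-in trained weights
b = rng.normal(scale=0.1, size=n_hidden)           # stand-in trained biases

targeted = np.arange(25)           # indices of the 25 attacked neurons (assumed)
target_preact = np.full(25, 5.0)   # desired pre-activation at those neurons (assumed)

# Solve W_t x = t - b_t in the least-squares sense; with 25 equations and
# 784 unknowns the system is underdetermined, so an exact solution exists.
W_t = W[targeted]
x_fool, *_ = np.linalg.lstsq(W_t, target_preact - b[targeted], rcond=None)

# Verify the targeted neurons fire with the chosen values after ReLU.
pre = W @ x_fool + b
post = np.maximum(pre, 0.0)
print(np.allclose(post[targeted], target_preact, atol=1e-6))
```

In the paper's setting the constraint would be derived from the faulted behaviour observed during training and combined with the network's later layers; the sketch above only shows the linear-system step in isolation.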
