Generative Pre-trained Transformer-Based Reinforcement Learning for Testing Web Application Firewalls
2023; IEEE Computer Society; Volume: 21; Issue: 1; Language: English
10.1109/tdsc.2023.3252523
ISSN: 2160-9209
Authors: Hongliang Liang, Xiangyu Li, Da Xiao, Jie Liu, Yanjie Zhou, Aibo Wang, Jin Li
Topic(s): Web Application Security Vulnerabilities
Abstract: Web Application Firewalls (WAFs) are widely deployed to protect key web applications against multiple security threats, so it is important to test WAFs regularly to prevent attackers from bypassing them easily. Machine-learning-based black-box WAF testing is gaining attention, but existing learning-based approaches impose strict requirements on the source and scale of payload data and suffer from the local optimum problem, limiting their effectiveness and practical applicability. We propose GPTFuzzer, a practical and effective generation-based approach to test WAFs by generating attack payloads token by token. Specifically, we fine-tune a Generative Pre-trained Transformer language model with reinforcement learning so that GPTFuzzer places minimal restrictions on payload data and is thus more applicable in practice, and we use reward modeling and a KL-divergence penalty to improve the effectiveness of our approach and mitigate the local optimum issue. We implement GPTFuzzer and evaluate it on two well-known open-source WAFs against three kinds of common attacks. Experimental results show that GPTFuzzer significantly outperforms state-of-the-art approaches, i.e., ML-Driven and RAT, finding up to 7.8× (3.2× on average) more bypassing payloads within 1,250,000 requests, or finding all bypassing payloads using up to 8.1× (3.3× on average) fewer requests.
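To make the core idea concrete, below is a minimal sketch (not the authors' code) of the KL-penalized reward shaping the abstract describes: the fine-tuned policy is rewarded when a generated payload bypasses the WAF, while a KL-divergence penalty against the frozen pre-trained model discourages the policy from drifting into a narrow local optimum. The function name, tensor inputs, and the coefficient value are hypothetical illustrations, not details from the paper.

```python
import torch


def kl_penalised_reward(policy_logprobs: torch.Tensor,
                        ref_logprobs: torch.Tensor,
                        waf_bypassed: bool,
                        kl_coef: float = 0.2) -> torch.Tensor:
    """Per-token rewards for one generated payload (illustrative sketch).

    policy_logprobs: log-probs of the generated tokens under the fine-tuned model.
    ref_logprobs:    log-probs of the same tokens under the frozen pre-trained GPT.
    waf_bypassed:    True if the payload evaded the WAF (the reward-model signal).
    """
    # Token-level KL estimate between the policy and the reference model.
    kl = policy_logprobs - ref_logprobs
    # Penalize drifting away from the pre-trained distribution at every token.
    rewards = -kl_coef * kl
    # Sparse task reward attached to the final token of the payload.
    rewards[-1] += 1.0 if waf_bypassed else 0.0
    return rewards
```

In an RL fine-tuning loop of this kind, such per-token rewards would typically feed a policy-gradient update (e.g., PPO) over batches of generated payloads; the exact training procedure used by GPTFuzzer is detailed in the paper itself.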