Article · Open access · Peer-reviewed

Coping with AI errors with provable guarantees

2024; Elsevier BV; Volume: 678; Language: English

DOI

10.1016/j.ins.2024.120856

ISSN

1872-6291

Authors

Ivan Tyukin, Tatiana A. Tyukina, Daniël van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Penelope M. Allison

Topic(s)

Machine Learning and Data Classification

Abstract

AI errors pose a significant challenge, hindering real-world applications. This work introduces a novel approach to coping with AI errors using weakly supervised error correctors that guarantee a specified level of error reduction. Our correctors have low computational cost and can be used to decide whether to abstain from making an unsafe classification. We provide new upper and lower bounds on the probability of errors in the corrected system. In contrast to existing work, these bounds are distribution-agnostic, non-asymptotic, and can be computed efficiently from the corrector training data alone. They can also be used in settings with concept drift, where the observed frequencies of the individual classes vary. The correctors can easily be updated, removed, or replaced in response to changes in the distributions within each class without retraining the underlying classifier. The approach is illustrated on two challenging tasks: (i) an image classification problem with scarce training data, and (ii) moderating the responses of large language models without retraining or otherwise fine-tuning them.
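The paper itself supplies the formal construction and the probabilistic bounds; as a rough, non-authoritative illustration only, the sketch below shows one way a weakly supervised, low-cost error corrector with an abstention rule could sit on top of a frozen classifier. Everything here is an assumption made for illustration and is not taken from the paper: the class name FisherCorrector, the choice of a Fisher linear discriminant in the classifier's feature space, the arrays feats and errs, the ridge term reg, and the abstention threshold.

import numpy as np

class FisherCorrector:
    """Illustrative corrector (not the paper's method): a linear discriminant
    separating 'base classifier wrong' from 'base classifier right' in feature
    space; predictions with high error scores are withheld (abstention)."""

    def fit(self, features, is_error, reg=1e-3):
        X_ok = features[~is_error]        # features of correctly classified inputs
        X_err = features[is_error]        # features of misclassified inputs
        m_ok, m_err = X_ok.mean(axis=0), X_err.mean(axis=0)
        # Pooled within-class covariance, ridge-regularised for numerical stability.
        S = np.cov(X_ok, rowvar=False) + np.cov(X_err, rowvar=False)
        S += reg * np.eye(S.shape[0])
        self.w = np.linalg.solve(S, m_err - m_ok)   # Fisher discriminant direction
        self.b = -0.5 * self.w @ (m_ok + m_err)     # midpoint offset
        return self

    def error_score(self, features):
        return features @ self.w + self.b

    def abstain(self, features, threshold=0.0):
        # True -> withhold the base classifier's prediction as likely unsafe.
        return self.error_score(features) > threshold

# Toy usage on synthetic data: 512-dimensional features, with 10% of the base
# classifier's predictions flagged as errors against held-out labels.
rng = np.random.default_rng(0)
feats = rng.normal(size=(2000, 512))
errs = rng.random(2000) < 0.1
feats[errs] += 0.5                                   # pretend errors drift in feature space
corrector = FisherCorrector().fit(feats, errs)
print("abstention rate:", float(corrector.abstain(feats).mean()))

In the paper's setting, the abstention threshold would be chosen so that the computed upper and lower bounds certify the desired error reduction; here it is simply left at the class midpoint for illustration.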
