We present a new methodology for handling errors of Artificial Intelligence (AI) by introducing weakly supervised AI error correctors with a priori performance guarantees. These AI correctors are auxiliary maps whose role is to moderate the decisions of a previously constructed underlying classifier by either approving or rejecting its decisions. The rejection of a decision can be used as a signal to suggest abstaining from making a decision. A key technical focus of the work is providing performance guarantees for these new AI correctors through bounds on the probabilities of incorrect decisions. These bounds are distribution-agnostic and do not rely on assumptions about the data dimension. Our empirical example illustrates how the framework can be applied to improve the performance of an image classifier in a challenging real-world task where training data are scarce.
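To make the role of a corrector concrete, below is a minimal illustrative sketch, not the paper's implementation: a hypothetical ErrorCorrector wraps a fixed underlying classifier and, from a small weakly labelled sample of the classifier's correct and incorrect decisions, fits a simple Fisher-style linear discriminant in feature space that approves or rejects each new decision. All names, the choice of discriminant, and the toy data are assumptions made for illustration.

```python
# Illustrative sketch only: a hypothetical corrector that approves or rejects
# decisions of a fixed underlying classifier. The linear (Fisher-style)
# discriminant and all names here are assumptions, not the paper's API.
import numpy as np


class ErrorCorrector:
    """Moderates an underlying classifier's decisions: approve or reject."""

    def __init__(self, threshold=0.0):
        self.threshold = threshold  # shift to trade off false approvals vs. rejections
        self.w = None               # discriminant direction
        self.b = 0.0                # decision offset

    def fit(self, feats_correct, feats_wrong):
        """Fit a linear discriminant separating feature vectors of inputs the
        classifier got right (feats_correct) from those it got wrong (feats_wrong)."""
        mu_c = feats_correct.mean(axis=0)
        mu_w = feats_wrong.mean(axis=0)
        cov = np.cov(np.vstack([feats_correct, feats_wrong]).T)
        cov += 1e-6 * np.eye(cov.shape[0])          # regularise for numerical stability
        self.w = np.linalg.solve(cov, mu_c - mu_w)  # Fisher-style direction
        self.b = 0.5 * (mu_c + mu_w) @ self.w       # midpoint between the two groups
        return self

    def approve(self, feats):
        """True where the decision is approved; False signals abstention."""
        return feats @ self.w - self.b > self.threshold


# Toy usage with 2-D features and a small labelled sample of each kind.
rng = np.random.default_rng(0)
ok = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(20, 2))      # correctly classified
bad = rng.normal(loc=[-1.0, -1.0], scale=0.3, size=(20, 2))   # misclassified
corrector = ErrorCorrector().fit(ok, bad)
print(corrector.approve(np.array([[0.9, 1.1], [-0.8, -1.2]])))  # [ True False]
```

In a full realisation, the rejection threshold would be calibrated so that the probability of an incorrect approved decision satisfies the paper's a priori bounds; that calibration step is omitted from this sketch.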
Funding
10.13039/100010343-Scan
Author affiliation
College of Science & Engineering
College of Social Sciences, Arts and Humanities
Computing & Mathematical Sciences
Archaeology & Ancient History
Source
2024 International Joint Conference on Neural Networks (IJCNN)
Version
AM (Accepted Manuscript)
Published in
2024 International Joint Conference on Neural Networks (IJCNN)