WisdomNet: trustable machine learning toward error-free classification

Truong X. Tran, Ramazan S. Aygun

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Misclassification is a critical problem in many machine learning applications. Since even classifier models with high accuracy (e.g., > 95%) still introduce some misclassification error, it may not be possible to rely on the output of a classifier. In this paper, we introduce trustable learning, which prompts the learning model to yield only the true output, thus avoiding misclassifications. Whenever the model cannot determine the output accurately, it should indicate that a misclassification error could occur if it were forced to classify, and hence it should decline to make a decision or defer it to a human expert. We therefore develop a methodology for trustable learning, apply it to artificial neural networks, and show that it is possible to build a classifier with 0% misclassification error. We propose a novel neural network architecture named WisdomNet that achieves zero prediction error by introducing an additional neuron, called the conjugate neuron, which indicates whether the network is able to classify the data correctly. The WisdomNet architecture can be applied to any previously built model, and we have evaluated it with several network architectures, such as multilayer perceptron, convolutional neural network, and deep network, on different data sets. The results show that WisdomNet reduces the classification error rate to 0% while labeling only a small fraction of the data (around 10%) as 'reject' when it is difficult to classify.
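The reject mechanism described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the network's output layer has one extra logit playing the role of the conjugate neuron, and that the sample is rejected whenever that logit wins the softmax; the function names (`wisdomnet_predict`, `softmax`) and the `REJECT` sentinel are hypothetical.

```python
import math

REJECT = -1  # hypothetical sentinel label returned when the conjugate neuron fires


def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def wisdomnet_predict(logits_with_conjugate):
    """Return a class index, or REJECT if the conjugate neuron wins.

    The last logit is treated as the conjugate neuron: if it receives
    the highest probability, the sample is labeled 'reject' instead of
    being forced into one of the real classes.
    """
    probs = softmax(logits_with_conjugate)
    best = max(range(len(probs)), key=probs.__getitem__)
    if best == len(probs) - 1:  # conjugate neuron dominates
        return REJECT
    return best


# Confident sample: class 1 clearly wins.
print(wisdomnet_predict([0.1, 4.0, 0.2, 0.3]))  # -> 1
# Ambiguous sample: the conjugate (last) neuron wins, so the sample is rejected.
print(wisdomnet_predict([0.5, 0.6, 0.4, 3.0]))  # -> -1
```

In this sketch, rejection is decided by the learned conjugate output rather than by a hand-tuned confidence threshold, which matches the paper's framing of letting the network itself signal when it cannot classify reliably.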

Original language: English (US)
Pages (from-to): 2719-2734
Number of pages: 16
Journal: Neural Computing and Applications
Volume: 33
Issue number: 7
DOIs
State: Published - Apr 2021

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
