89 Confidence Framework in Classification
There is no universally accepted methodology for determining how much confidence one should have in a classifier output. This research proposes a framework for determining the level of confidence in an indication from a classifier system whose output is a measurement value. The level of indication confidence comprises "classifier confidence" and "exemplar confidence." Classifier confidence is estimated using the exemplars in a test set and is reflected in quantities such as classification accuracy, average entropy, and confidence regions about these quantities. A classifier exhibits exemplar confidence through the magnitude of its posterior probability estimate for that exemplar. Classifier confidence and exemplar confidence are combined to form the level of confidence for a given indication from a given classifier. In this paradigm, posterior probabilities are essentially adjusted based upon the confidence in the underlying classifier. The paradigm is applied to synthetic data as well as two real-world data sets.
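The abstract does not specify how the two confidence levels are combined, so the sketch below is only an illustrative reading: test-set accuracy and average posterior entropy stand in for classifier confidence, the top posterior probability stands in for exemplar confidence, and a simple multiplicative adjustment (a hypothetical rule, not necessarily the paper's) scales the posterior by the classifier-level confidence. All function names here are invented for illustration.

```python
import math

def classifier_confidence(y_true, y_pred):
    """Test-set classification accuracy, one proxy for classifier confidence."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def average_entropy(posterior_list):
    """Mean Shannon entropy of the test-set posteriors (lower = more confident)."""
    def h(p):
        return -sum(q * math.log2(q) for q in p if q > 0)
    return sum(h(p) for p in posterior_list) / len(posterior_list)

def indication_confidence(clf_conf, posterior):
    """Hypothetical combination rule: adjust the exemplar's top posterior
    probability downward by the classifier-level confidence."""
    return clf_conf * max(posterior)

# Toy test set: 9 of 10 predictions correct -> classifier confidence 0.9
clf_conf = classifier_confidence([0] * 5 + [1] * 5, [0] * 5 + [1] * 4 + [0])

# A new exemplar whose posterior estimate peaks at 0.8 is then trusted
# at roughly 0.9 * 0.8 = 0.72 under this multiplicative adjustment.
conf = indication_confidence(clf_conf, [0.1, 0.8, 0.1])
```

The multiplicative rule is just one way to make a confident posterior from an unreliable classifier count for less; the paper's actual combination, and its use of confidence regions about accuracy and entropy, may be more elaborate.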