TY - JOUR
T1 - Critic-driven ensemble classification
AU - Miller, David J.
AU - Yan, Lian
N1 - Funding Information:
Manuscript received September 4, 1998; revised March 31, 1999. This work was supported in part by a National Science Foundation Career Award IIS-9624870. The associate editor coordinating the review of this paper and approving it for publication was Prof. James A. Bucklew.
PY - 1999
Y1 - 1999
N2 - We develop new rules for combining the estimates obtained from each classifier in an ensemble, in order to address problems involving multiple (>2) classes. A variety of techniques have been previously suggested, including averaging probability estimates from each classifier, as well as hard (0-1) voting schemes. In this work, we introduce the notion of a critic associated with each classifier, whose objective is to predict the classifier's errors. Since the critic only tackles a two-class problem, its predictions are generally more reliable than those of the classifier and, thus, can be used as the basis for improved combination rules. Several such rules are suggested here. While previous techniques are only effective when the individual classifier error rate is p<0.5, the new approach is successful, as proved under an independence assumption, even when this condition is violated - in particular, so long as p+q<1, with q the critic's error rate. More generally, critic-driven combining is found to achieve significant performance gains over alternative methods on a number of benchmark data sets. We also propose a new analytical tool for modeling ensemble performance, based on dependence between experts. This approach is substantially more accurate than the analysis based on independence that is often used to justify ensemble methods.
AB - We develop new rules for combining the estimates obtained from each classifier in an ensemble, in order to address problems involving multiple (>2) classes. A variety of techniques have been previously suggested, including averaging probability estimates from each classifier, as well as hard (0-1) voting schemes. In this work, we introduce the notion of a critic associated with each classifier, whose objective is to predict the classifier's errors. Since the critic only tackles a two-class problem, its predictions are generally more reliable than those of the classifier and, thus, can be used as the basis for improved combination rules. Several such rules are suggested here. While previous techniques are only effective when the individual classifier error rate is p<0.5, the new approach is successful, as proved under an independence assumption, even when this condition is violated - in particular, so long as p+q<1, with q the critic's error rate. More generally, critic-driven combining is found to achieve significant performance gains over alternative methods on a number of benchmark data sets. We also propose a new analytical tool for modeling ensemble performance, based on dependence between experts. This approach is substantially more accurate than the analysis based on independence that is often used to justify ensemble methods.
UR - http://www.scopus.com/inward/record.url?scp=0033327709&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0033327709&partnerID=8YFLogxK
U2 - 10.1109/78.790663
DO - 10.1109/78.790663
M3 - Article
AN - SCOPUS:0033327709
SN - 1053-587X
VL - 47
SP - 2833
EP - 2844
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
IS - 10
ER -
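
A minimal Python sketch of one plausible critic-driven combining rule, under the independence assumption described in the abstract: each classifier errs with probability p (errors concentrated on a single confusable class, so that plain plurality voting breaks down when p > 0.5), each critic mislabels its classifier's correctness with probability q, and only critic-endorsed votes are counted. The rule, names, and parameter values below are illustrative assumptions, not the paper's exact combination rules; with p = 0.6 and q = 0.2 (so p + q < 1), the critic-driven tally recovers the true class while plain voting does not.

import random

def simulate(num_classifiers=15, num_classes=4, p=0.6, q=0.2,
             trials=20000, seed=0):
    """Toy comparison of plain plurality voting vs. a critic-driven rule.

    Illustrative assumptions only: classifiers err independently with
    probability p, errors land on a single confusable wrong class, and
    each critic mislabels its classifier's correctness with probability q.
    """
    rng = random.Random(seed)
    plain_correct = critic_correct = 0
    for _ in range(trials):
        truth = rng.randrange(num_classes)
        confusable = (truth + 1) % num_classes      # wrong class where errors concentrate
        plain_votes = [0] * num_classes
        critic_votes = [0] * num_classes
        for _ in range(num_classifiers):
            is_right = rng.random() >= p            # classifier correct with prob. 1 - p
            guess = truth if is_right else confusable
            plain_votes[guess] += 1
            critic_is_right = rng.random() >= q     # critic correct with prob. 1 - q
            critic_endorses = is_right if critic_is_right else not is_right
            if critic_endorses:                     # count only critic-endorsed votes
                critic_votes[guess] += 1
        plain_correct += int(max(range(num_classes), key=plain_votes.__getitem__) == truth)
        critic_correct += int(max(range(num_classes), key=critic_votes.__getitem__) == truth)
    return plain_correct / trials, critic_correct / trials

if __name__ == "__main__":
    plain_acc, critic_acc = simulate()
    print(f"plain plurality voting accuracy: {plain_acc:.3f}")
    print(f"critic-driven voting accuracy:   {critic_acc:.3f}")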