TY - JOUR
T1 - Extensions of transductive learning for distributed ensemble classification and application to biometric authentication
AU - Miller, David J.
AU - Pal, Siddharth
AU - Wang, Yue
N1 - Author biography:
David J. Miller received the B.S.E. degree from Princeton University, Princeton, NJ, in 1987, the M.S.E. degree from the University of Pennsylvania, Philadelphia, PA, in 1990, and the Ph.D. degree from the University of California, Santa Barbara, in 1995, all in electrical engineering. From January 1988 through January 1990, he was with General Atronics Corp., Wyndmoor, PA. From September 1995 to July 2001 he was Assistant Professor of electrical engineering at The Pennsylvania State University, University Park campus. He is now tenured Professor of electrical engineering at The Pennsylvania State University. His research interests include machine learning, source coding and coding over noisy channels, image and video coding, bioinformatics, and data mining for network security. Dr. Miller received the National Science Foundation CAREER Award in 1996. Since 1997, he has been a Member of the Neural Networks for Signal Processing Technical Committee within the IEEE Signal Processing Society (SPS). He was General Co-chair for the 2001 IEEE Workshop on Neural Networks for Signal Processing. In 2004, Dr. Miller was appointed an Associate Editor for IEEE Transactions on Signal Processing. Dr. Miller is currently Chair of the Machine Learning for Signal Processing Technical Committee, within the IEEE SPS.
PY - 2008/12
Y1 - 2008/12
N2 - We consider ensemble classification when there is no common labeled data for designing the function which aggregates classifier decisions. In recent work, we dubbed this problem distributed ensemble classification, which arises when local classifiers are trained on different (e.g., proprietary, legacy) databases or operate on different sensing modalities. Typically, fixed (untrained) rules of classifier combination such as voting methods are used in this case. However, these may perform poorly, especially when (i) the local class priors, used in training, differ from the true (test batch) priors and (ii) classifier decisions are statistically dependent. Alternatively, we proposed several transductive methods, optimizing the combining rule for objective functions measured on the test batch. We proposed both maximum likelihood (ML) and constraint-based (CB) objectives and found that CB achieved superior performance. Here, we develop CB extensions (i) for sequential decision-making and (ii) for exploiting additional class information contained in the local classifier feature vectors. The new sequential method is applied to biometric authentication. We demonstrate that these new CB methods achieve better ensemble decision accuracy than methods which apply fixed rules in combining classifier decisions.
AB - We consider ensemble classification when there is no common labeled data for designing the function which aggregates classifier decisions. In recent work, we dubbed this problem distributed ensemble classification, which arises when local classifiers are trained on different (e.g., proprietary, legacy) databases or operate on different sensing modalities. Typically, fixed (untrained) rules of classifier combination such as voting methods are used in this case. However, these may perform poorly, especially when (i) the local class priors, used in training, differ from the true (test batch) priors and (ii) classifier decisions are statistically dependent. Alternatively, we proposed several transductive methods, optimizing the combining rule for objective functions measured on the test batch. We proposed both maximum likelihood (ML) and constraint-based (CB) objectives and found that CB achieved superior performance. Here, we develop CB extensions (i) for sequential decision-making and (ii) for exploiting additional class information contained in the local classifier feature vectors. The new sequential method is applied to biometric authentication. We demonstrate that these new CB methods achieve better ensemble decision accuracy than methods which apply fixed rules in combining classifier decisions.
UR - http://www.scopus.com/inward/record.url?scp=56149109016&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=56149109016&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2008.03.018
DO - 10.1016/j.neucom.2008.03.018
M3 - Article
AN - SCOPUS:56149109016
SN - 0925-2312
VL - 72
SP - 119
EP - 125
JO - Neurocomputing
JF - Neurocomputing
IS - 1-3
ER -