A transductive extension of maximum entropy/iterative scaling for decision aggregation in distributed classification

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Many ensemble classification systems apply supervised learning to design a function for combining classifier decisions, which requires labeled training samples common to the entire classifier ensemble. Without such data, fixed rules (e.g., voting or the Bayes rule) are usually applied. As an alternative, [1] proposed a transductive, constraint-based learning strategy for learning how to fuse decisions even without labeled examples. There, decisions on test samples were chosen to satisfy constraints measured by each local classifier. That work has two main limitations. First, feasibility of the constraints was not guaranteed. Second, a heuristic learning procedure was applied. Here we overcome both problems via a transductive extension of maximum entropy/improved iterative scaling for aggregation in distributed classification. This method is shown to achieve improved decision accuracy over the earlier transductive approach on a number of UC Irvine data sets.
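To illustrate the flavor of the method described in the abstract (this is a rough sketch, not the authors' algorithm): a maximum-entropy model over joint test-sample labelings can be fitted so that its expected feature values match constraint targets, here via generalized iterative scaling, a simpler relative of the improved iterative scaling the paper uses. The outcomes, feature functions, and targets below are made-up stand-ins for the per-classifier statistics of the paper.

```python
import math

# Toy setting: joint labelings of two binary test samples.
# In the paper's setting, constraint targets would be statistics
# measured by each local classifier on the unlabeled test batch;
# here they are hypothetical values chosen for illustration.
outcomes = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Feature functions: f0 = label of sample 1, f1 = label of sample 2.
features = [lambda y: float(y[0]), lambda y: float(y[1])]

targets = [0.7, 0.4]  # desired model expectations of each feature

C = 2.0          # scaling constant: max total feature count per outcome
lam = [0.0, 0.0]  # Lagrange multipliers (one per constraint)

def model():
    """Gibbs distribution p(y) ∝ exp(sum_i lam_i * f_i(y))."""
    w = [math.exp(sum(l * f(y) for l, f in zip(lam, features)))
         for y in outcomes]
    z = sum(w)
    return [wi / z for wi in w]

# Generalized iterative scaling: damped multiplicative updates that
# drive each model expectation toward its target.
for _ in range(200):
    p = model()
    for i, f in enumerate(features):
        e = sum(pi * f(y) for pi, y in zip(p, outcomes))
        lam[i] += (1.0 / C) * math.log(targets[i] / e)

p = model()
avg = [sum(pi * f(y) for pi, y in zip(p, outcomes)) for f in features]
# avg is now close to the targets [0.7, 0.4]
```

The fitted distribution is the maximum-entropy one among all distributions satisfying the constraints; the paper's transductive extension additionally ensures the constraint set is feasible, which this sketch simply assumes.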

Original language: English (US)
Title of host publication: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP
Pages: 1865-1868
Number of pages: 4
State: Published - 2008
Event: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP - Las Vegas, NV, United States
Duration: Mar 31, 2008 - Apr 4, 2008

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Other

Other: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP
Country/Territory: United States
City: Las Vegas, NV
Period: 3/31/08 - 4/4/08

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
