Semisupervised domain adaptation for mixture model based classifiers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This paper introduces a method for mixture model-based classifier domain adaptation, wherein one has adequate labeled training data for one (source) domain, very scarce labeled data for another (target) domain, and where the discrepancy between the source and target domain class-conditional distributions is not too great. Starting from the source domain classifier parameters, the method maximizes the likelihood of target domain data, while constrained to agree as much as possible with the target domain label information. This is achieved via an expectation maximization (EM) algorithm, where the joint distribution of the latent variables in the E-Step is parametrically constrained, in order to ensure space-partitioning implications are gleaned from the labeled target domain samples. Experiments on publicly available Internet packet-flow traffic data from different temporal and spatial domains demonstrate significant gains in classification performance compared to (1) direct porting of the source domain classifier; (2) semisupervised learning using only the target domain data; and (3) extension of an existing unsupervised domain adaptation method.
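To make the adaptation idea concrete, the sketch below illustrates one simplified reading of the abstract: a pooled Gaussian mixture class-conditional classifier is warm-started from source-domain parameters and refit on target-domain data by EM, with the E-step responsibilities of the few labeled target samples restricted to components owned by their known class. This is only an assumption-laden simplification (a hard constraint rather than the paper's parametric constraint on the joint latent distribution), and names such as adapt_em and comp_class are hypothetical, not from the paper.

```python
# Minimal, illustrative sketch (NOT the paper's exact formulation):
# EM for a Gaussian-mixture class-conditional classifier, warm-started
# from source-domain parameters and refit on target-domain data.
# Labeled target samples are hard-constrained in the E-step to the
# mixture components owned by their known class; unlabeled samples
# are handled as in standard EM. All names here are assumptions.
import numpy as np
from scipy.stats import multivariate_normal


def adapt_em(X, y, src_means, src_covs, src_weights, comp_class, n_iter=50):
    """X: (n, d) target features; y: (n,) labels, -1 marks unlabeled samples.
    src_*: source-domain mixture parameters (K components, pooled over classes).
    comp_class: (K,) class index that owns each mixture component."""
    means, covs, weights = src_means.copy(), src_covs.copy(), src_weights.copy()
    n, d = X.shape
    K = len(weights)
    for _ in range(n_iter):
        # E-step: log responsibility of each component for each sample.
        log_r = np.zeros((n, K))
        for k in range(K):
            log_r[:, k] = np.log(weights[k] + 1e-12) + multivariate_normal.logpdf(
                X, means[k], covs[k], allow_singular=True)
        # Constrain labeled target samples to their class's components.
        for i in range(n):
            if y[i] >= 0:
                log_r[i, comp_class != y[i]] = -np.inf
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: standard mixture parameter updates.
        nk = r.sum(axis=0) + 1e-12
        weights = nk / n
        for k in range(K):
            means[k] = (r[:, k][:, None] * X).sum(axis=0) / nk[k]
            diff = X - means[k]
            covs[k] = (r[:, k][:, None] * diff).T @ diff / nk[k] + 1e-6 * np.eye(d)
    return means, covs, weights


def classify(X, means, covs, weights, comp_class, n_classes):
    """Assign each sample to the class with the largest mixture posterior mass."""
    post = np.zeros((X.shape[0], n_classes))
    for k in range(len(weights)):
        post[:, comp_class[k]] += weights[k] * multivariate_normal.pdf(
            X, means[k], covs[k], allow_singular=True)
    return post.argmax(axis=1)
```

Warm-starting from the source parameters and constraining only the labeled rows mirrors the abstract's setup (abundant source labels, scarce target labels, moderately shifted class-conditionals); when target labels are absent the loop reduces to ordinary unsupervised EM on the target data.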

Original language: English (US)
Title of host publication: 2012 46th Annual Conference on Information Sciences and Systems, CISS 2012
DOIs
State: Published - 2012
Event: 2012 46th Annual Conference on Information Sciences and Systems, CISS 2012 - Princeton, NJ, United States
Duration: Mar 21 2012 - Mar 23 2012

Publication series

Name: 2012 46th Annual Conference on Information Sciences and Systems, CISS 2012

Other

Other: 2012 46th Annual Conference on Information Sciences and Systems, CISS 2012
Country/Territory: United States
City: Princeton, NJ
Period: 3/21/12 - 3/23/12

All Science Journal Classification (ASJC) codes

  • Information Systems
