Abstract

Many big data applications give rise to distributional data, in which objects or individuals are naturally represented as K-tuples of bags of feature values, where the values in each bag are sampled from a feature- and object-specific distribution. We formulate and solve the problem of learning classifiers from distributional data. We consider three classes of methods for learning distributional classifiers: (i) those that rely on aggregation to encode distributional data into tuples of attribute values, i.e., instances that can be handled by traditional supervised machine learning algorithms; (ii) those that are based on generative models of distributional data; and (iii) the discriminative counterparts of the generative models considered in (ii). We compare the performance of the different algorithms on real-world as well as synthetic distributional data sets. The results of our experiments demonstrate that classifiers that exploit the information available in the distributional instance representation match or outperform those that fail to fully exploit such information.
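The first class of methods, aggregation-based encoding, can be illustrated with a minimal sketch (not taken from the paper; the function name and the choice of mean/standard-deviation summaries are illustrative assumptions): each bag in a distributional instance is collapsed into a few summary statistics, yielding a fixed-length attribute vector that any standard supervised learner can consume.

```python
import statistics

def aggregate_instance(bags):
    """Illustrative aggregation-based encoding: collapse a distributional
    instance (a K-tuple of bags of numeric feature values) into a flat
    attribute vector by summarizing each bag with its mean and its
    population standard deviation. Other summaries (min, max, quantiles)
    could be substituted."""
    features = []
    for bag in bags:
        features.append(statistics.fmean(bag))
        features.append(statistics.pstdev(bag))  # 0.0 for constant bags
    return features

# Example: one object represented by K=2 feature bags becomes a
# 4-dimensional instance usable by a conventional classifier.
instance = ([1.0, 2.0, 3.0], [10.0, 10.0])
vector = aggregate_instance(instance)
```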

Original language: English (US)
Title of host publication: Proceedings - 2013 IEEE International Congress on Big Data, BigData 2013
Pages: 302-309
Number of pages: 8
State: Published - 2013
Event: 2013 IEEE International Congress on Big Data, BigData 2013 - Santa Clara, CA, United States
Duration: Jun 27, 2013 - Jul 2, 2013

Publication series

Name: Proceedings - 2013 IEEE International Congress on Big Data, BigData 2013


All Science Journal Classification (ASJC) codes

  • Computer Science Applications
