Meta-classifiers for multimodal document classification

Scott Deeann Chen, Vishal Monga, Pierre Moulin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

This paper proposes learning algorithms for multimodal document classification. Specifically, we develop classifiers that automatically assign documents to categories by exploiting features from both text and image content. In particular, we use meta-classifiers that combine state-of-the-art text-based and image-based classifiers to make joint decisions. The two meta-classifiers we consider are based on support vector machines and AdaBoost. Experiments on real-world databases from Wikipedia demonstrate the benefits of jointly exploiting these modalities.
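The paper does not include code; the sketch below is only a minimal illustration of the meta-classifier (stacking) idea described in the abstract, under assumptions not taken from the paper: scikit-learn estimators, synthetic feature vectors standing in for the actual text and image representations, and linear SVMs as the unimodal base classifiers. The two meta-classifiers mirror the SVM and AdaBoost choices mentioned above.

```python
# Hypothetical sketch: combine unimodal text/image classifier scores
# with a meta-classifier. All data and model choices here are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-document text and image feature vectors.
n_docs, n_text_feats, n_img_feats = 600, 50, 30
y = rng.integers(0, 2, size=n_docs)                              # category labels
X_text = rng.normal(size=(n_docs, n_text_feats)) + 0.5 * y[:, None]
X_img = rng.normal(size=(n_docs, n_img_feats)) + 0.3 * y[:, None]

# Single split for brevity; in practice the scores fed to the meta-classifier
# would come from held-out (cross-validated) predictions of the base models.
idx_train, idx_test = train_test_split(np.arange(n_docs), random_state=0)

# Unimodal base classifiers, one per modality.
text_clf = LinearSVC().fit(X_text[idx_train], y[idx_train])
img_clf = LinearSVC().fit(X_img[idx_train], y[idx_train])

def base_scores(idx):
    """Stack the two unimodal decision scores into a 2-D meta-feature."""
    return np.column_stack([
        text_clf.decision_function(X_text[idx]),
        img_clf.decision_function(X_img[idx]),
    ])

# Two candidate meta-classifiers over the stacked scores.
meta_svm = SVC(kernel="rbf").fit(base_scores(idx_train), y[idx_train])
meta_ada = AdaBoostClassifier(n_estimators=50).fit(base_scores(idx_train), y[idx_train])

print("SVM meta-classifier accuracy:", meta_svm.score(base_scores(idx_test), y[idx_test]))
print("AdaBoost meta-classifier accuracy:", meta_ada.score(base_scores(idx_test), y[idx_test]))
```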

Original language: English (US)
Title of host publication: 2009 IEEE International Workshop on Multimedia Signal Processing, MMSP '09
DOIs
State: Published - 2009
Event: 2009 IEEE International Workshop on Multimedia Signal Processing, MMSP '09 - Rio De Janeiro, Brazil
Duration: Oct 5 2009 - Oct 7 2009

Other

Other: 2009 IEEE International Workshop on Multimedia Signal Processing, MMSP '09
Country/Territory: Brazil
City: Rio De Janeiro
Period: 10/5/09 - 10/7/09

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Vision and Pattern Recognition
  • Signal Processing
