Joint image and text representation for aesthetics analysis

Ye Zhou, Xin Lu, Junping Zhang, James Z. Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

39 Scopus citations

Abstract

Image aesthetics assessment is essential to multimedia applications such as image retrieval and personalized image search and recommendation. Primarily relying on visual information and manually supplied ratings, previous studies in this area have not adequately utilized higher-level semantic information. We incorporate textual phrases from user comments to jointly represent image aesthetics using a multimodal Deep Boltzmann Machine. Given an image, without requiring any associated user comments, the proposed algorithm automatically infers the joint representation and predicts the aesthetics category of the image. We construct the AVA-Comments dataset to systematically evaluate the performance of the proposed algorithm. Experimental results indicate that the proposed joint representation improves the performance of aesthetics assessment on the benchmark AVA dataset, compared with using visual features alone.
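The pipeline the abstract describes, a joint hidden layer driven by image and text pathways that can still be inferred at test time from the image alone, can be sketched roughly as follows. This is a minimal illustration, not the authors' actual multimodal Deep Boltzmann Machine: the function names, layer sizes, and the single mean-field-style update are all assumptions made for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_hidden(image_feat, W_img, text_feat=None, W_txt=None, bias=0.0):
    """Infer a joint hidden representation with one mean-field-style step.

    When text features are absent (the test-time setting the abstract
    describes), only the image pathway drives the joint layer.
    """
    pre_activation = image_feat @ W_img
    if text_feat is not None and W_txt is not None:
        pre_activation = pre_activation + text_feat @ W_txt
    return sigmoid(pre_activation + bias)

def predict_aesthetics(joint_rep, w_cls, b_cls=0.0):
    """Binary aesthetics decision (high vs. low) from the joint representation."""
    return bool(sigmoid(joint_rep @ w_cls + b_cls) > 0.5)

# Toy usage with random weights (real weights would come from training).
rng = np.random.default_rng(0)
W_img = rng.standard_normal((4, 3))   # image features -> joint layer
W_txt = rng.standard_normal((5, 3))   # text features  -> joint layer
img = rng.standard_normal(4)
txt = rng.standard_normal(5)

h_train = joint_hidden(img, W_img, txt, W_txt)  # both modalities available
h_test = joint_hidden(img, W_img)               # image only, as at test time
label = predict_aesthetics(h_test, rng.standard_normal(3))
```

The key property mirrored here is that the same joint layer accepts one or both modalities, so a model trained with comments can still score comment-free images.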

Original language: English (US)
Title of host publication: MM 2016 - Proceedings of the 2016 ACM Multimedia Conference
Publisher: Association for Computing Machinery, Inc
Pages: 262-266
Number of pages: 5
ISBN (Electronic): 9781450336031
DOIs
State: Published - Oct 1 2016
Event: 24th ACM Multimedia Conference, MM 2016 - Amsterdam, United Kingdom
Duration: Oct 15 2016 – Oct 19 2016

Publication series

Name: MM 2016 - Proceedings of the 2016 ACM Multimedia Conference

Other

Other: 24th ACM Multimedia Conference, MM 2016
Country/Territory: United Kingdom
City: Amsterdam
Period: 10/15/16 – 10/19/16

All Science Journal Classification (ASJC) codes

  • Computer Graphics and Computer-Aided Design
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Software
