TY - GEN
T1 - Annotating images and image objects using a hierarchical Dirichlet process model
AU - Yakhnenko, Oksana
AU - Honavar, Vasant
PY - 2008
Y1 - 2008
N2 - Many applications call for learning to label individual objects in an image when the only information available to the learner is a dataset of images with their associated captions, i.e., words that describe the image content without specifically labeling the individual objects. We address this problem using a multi-modal hierarchical Dirichlet process model (MoM-HDP), a nonparametric Bayesian model that generalizes the multi-modal latent Dirichlet allocation model (MoM-LDA) used for similar problems in the past. We apply this model to predict the labels of objects in images containing multiple objects. During training, the model has access to an unsegmented image and its caption, but not to the labels of the individual objects in the image. The trained model is then used to predict the label for each region of interest in a segmented image. MoM-HDP generalizes MoM-LDA in that it allows the number of components of the mixture model to adapt to the data. The model parameters are estimated efficiently using variational inference. Our experiments show that MoM-HDP performs as well as or better than MoM-LDA, regardless of the number of clusters chosen for the MoM-LDA model.
UR - http://www.scopus.com/inward/record.url?scp=77955987622&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77955987622&partnerID=8YFLogxK
U2 - 10.1145/1509212.1509213
DO - 10.1145/1509212.1509213
M3 - Conference contribution
AN - SCOPUS:77955987622
SN - 9781605582610
T3 - Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
SP - 1
EP - 7
BT - Proceedings of the MDM 2008 Workshop - 9th International Workshop on Multimedia Data Mining, Held in Conjunction with the ACM SIGKDD 2008
T2 - 9th International Workshop on Multimedia Data Mining, MDM 2008, Held in Conjunction with the ACM SIGKDD 2008
Y2 - 24 August 2008 through 24 August 2008
ER -