TY - GEN
T1 - Abstractive multi-document summarization via phrase selection and merging
AU - Bing, Lidong
AU - Li, Piji
AU - Liao, Yi
AU - Lam, Wai
AU - Guo, Weiwei
AU - Passonneau, Rebecca J.
N1 - Publisher Copyright:
© 2015 Association for Computational Linguistics.
PY - 2015
Y1 - 2015
N2 - We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases. Different from existing abstraction-based approaches, our method first constructs a pool of concepts and facts represented by phrases from the input documents. New sentences are then generated by selecting and merging informative phrases so as to maximize the salience of the phrases while satisfying sentence construction constraints. We employ integer linear optimization to conduct phrase selection and merging simultaneously, achieving a globally optimal solution for a summary. Experimental results on the benchmark data set TAC 2011 show that our framework outperforms state-of-the-art models under the automated pyramid evaluation metric and achieves reasonably good results on manual linguistic quality evaluation.
AB - We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases. Different from existing abstraction-based approaches, our method first constructs a pool of concepts and facts represented by phrases from the input documents. New sentences are then generated by selecting and merging informative phrases so as to maximize the salience of the phrases while satisfying sentence construction constraints. We employ integer linear optimization to conduct phrase selection and merging simultaneously, achieving a globally optimal solution for a summary. Experimental results on the benchmark data set TAC 2011 show that our framework outperforms state-of-the-art models under the automated pyramid evaluation metric and achieves reasonably good results on manual linguistic quality evaluation.
UR - http://www.scopus.com/inward/record.url?scp=84943785681&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84943785681&partnerID=8YFLogxK
U2 - 10.3115/v1/p15-1153
DO - 10.3115/v1/p15-1153
M3 - Conference contribution
AN - SCOPUS:84943785681
T3 - ACL-IJCNLP 2015 - 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Proceedings of the Conference
SP - 1587
EP - 1597
BT - ACL-IJCNLP 2015 - 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Proceedings of the Conference
PB - Association for Computational Linguistics (ACL)
T2 - 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL-IJCNLP 2015
Y2 - 26 July 2015 through 31 July 2015
ER -