TY - GEN
T1 - Multi-document abstractive summarization using ILP based multi-sentence compression
AU - Banerjee, Siddhartha
AU - Mitra, Prasenjit
AU - Sugiyama, Kazunari
PY - 2015
Y1 - 2015
N2 - Abstractive summarization is an ideal form of summarization because it can synthesize information from multiple documents to create concise, informative summaries. In this work, we aim to develop an abstractive summarizer. First, our proposed approach identifies the most important document in the multi-document set. The sentences in the most important document are aligned to sentences in the other documents to generate clusters of similar sentences. Second, we generate K-shortest paths from the sentences in each cluster using a word-graph structure. Finally, we select sentences from the set of shortest paths generated from all the clusters using a novel integer linear programming (ILP) model whose objective is to maximize the information content and readability of the final summary. Our ILP model represents the shortest paths as binary variables and considers the length of the path, the information score, and the linguistic quality score in the objective function. Experimental results on the DUC 2004 and 2005 multi-document summarization datasets show that our proposed approach outperforms all baselines and state-of-the-art extractive summarizers as measured by ROUGE scores. Our method also outperforms a recent abstractive summarization technique. In manual evaluation, our approach also achieves promising results in terms of informativeness and readability.
UR - http://www.scopus.com/inward/record.url?scp=84949778956&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84949778956&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84949778956
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 1208
EP - 1214
BT - IJCAI 2015 - Proceedings of the 24th International Joint Conference on Artificial Intelligence
A2 - Wooldridge, Michael
A2 - Yang, Qiang
PB - International Joint Conferences on Artificial Intelligence
T2 - 24th International Joint Conference on Artificial Intelligence, IJCAI 2015
Y2 - 25 July 2015 through 31 July 2015
ER -