TY - GEN
T1 - DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization
T2 - 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022
AU - Mao, Ziming
AU - Wu, Chen Henry
AU - Ni, Ansong
AU - Zhang, Yusen
AU - Zhang, Rui
AU - Yu, Tao
AU - Deb, Budhaditya
AU - Zhu, Chenguang
AU - Awadallah, Ahmed H.
AU - Radev, Dragomir
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
AB - Transformer-based models have achieved state-of-the-art performance on short-input summarization. However, they still struggle with summarizing longer text. In this paper, we present DYLE, a novel dynamic latent extraction approach for abstractive long-input summarization. DYLE jointly trains an extractor and a generator and treats the extracted text snippets as the latent variable, allowing dynamic snippet-level attention weights during decoding. To provide adequate supervision, we propose simple yet effective heuristics for oracle extraction as well as a consistency loss term, which encourages the extractor to approximate the averaged dynamic weights predicted by the generator. We evaluate our method on different long-document and long-dialogue summarization tasks: GovReport, QMSum, and arXiv. Experiment results show that DYLE outperforms all existing methods on GovReport and QMSum, with gains up to 6.1 ROUGE, while yielding strong results on arXiv. Further analysis shows that the proposed dynamic weights provide interpretability of our generation process.
UR - http://www.scopus.com/inward/record.url?scp=85138908451&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138908451&partnerID=8YFLogxK
U2 - 10.18653/v1/2022.acl-long.118
DO - 10.18653/v1/2022.acl-long.118
M3 - Conference contribution
AN - SCOPUS:85138908451
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 1687
EP - 1698
BT - ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
A2 - Muresan, Smaranda
A2 - Nakov, Preslav
A2 - Villavicencio, Aline
PB - Association for Computational Linguistics (ACL)
Y2 - 22 May 2022 through 27 May 2022
ER -