Evaluating content selection in summarization: The pyramid method

Ani Nenkova, Rebecca Passonneau

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

411 Scopus citations

Abstract

We present an empirically grounded method for evaluating content selection in summarization. It incorporates the idea that no single best model summary exists for a collection of documents. Our method quantifies the relative importance of the facts to be conveyed. We argue that it is reliable, predictive, and diagnostic, and thus addresses considerable shortcomings of the human evaluation method currently used in the Document Understanding Conference.
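The core idea described in the abstract — weighting facts by how many human model summaries express them — can be sketched in a few lines. In the pyramid method, each Summary Content Unit (SCU) is weighted by the number of model summaries that contain it, and a peer summary's score is the weight it achieves divided by the best weight achievable with the same number of SCUs. The function below is an illustrative sketch under that reading, not the authors' implementation; the data structures (sets of SCU labels) are assumptions for illustration.

```python
from collections import Counter

def pyramid_score(peer_scus, model_annotations):
    """Sketch of a pyramid content-selection score.

    peer_scus: set of SCU labels found in the peer summary.
    model_annotations: list of SCU-label sets, one per human model summary.
    """
    # An SCU's weight is the number of model summaries expressing it.
    weights = Counter()
    for scus in model_annotations:
        weights.update(set(scus))

    # Weight actually achieved by the peer summary.
    achieved = sum(weights[scu] for scu in set(peer_scus))

    # Optimal weight for a summary expressing the same number of SCUs:
    # take the heaviest SCUs available in the pyramid.
    n = len(set(peer_scus))
    optimal = sum(sorted(weights.values(), reverse=True)[:n])

    return achieved / optimal if optimal else 0.0
```

For example, with three model summaries annotated as {a, b}, {a, c}, {a, b}, the SCU weights are a=3, b=2, c=1; a peer summary expressing {b, c} achieves weight 3 against an optimal 5, scoring 0.6.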

Original language: English (US)
Title of host publication: HLT-NAACL 2004 - Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Proceedings of the Main Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 145-152
Number of pages: 8
ISBN (Electronic): 193243223X, 9781932432237
State: Published - 2004
Event: 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2004 - Boston, United States
Duration: May 2, 2004 - May 7, 2004

Publication series

Name: HLT-NAACL 2004 - Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Proceedings of the Main Conference

Conference

Conference: 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2004
Country/Territory: United States
City: Boston
Period: 5/2/04 - 5/7/04

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Linguistics and Language
