The Pyramid Method: Incorporating human content selection variation in summarization evaluation

Ani Nenkova, Rebecca Passonneau, Kathleen Mckeown

Research output: Contribution to journal › Article › peer-review

230 Scopus citations

Abstract

Human variation in content selection in summarization has given rise to some fundamental research questions: How can one incorporate the observed variation in suitable evaluation measures? How can such measures reflect the fact that summaries conveying different content can be equally good and informative? In this article, we address these very questions by proposing a method for analysis of multiple human abstracts into semantic content units. Such analysis allows us not only to quantify human variation in content selection, but also to assign empirical importance weights to different content units. It serves as the basis for an evaluation method, the Pyramid Method, that incorporates the observed variation and is predictive of different equally informative summaries. We discuss the reliability of content unit annotation, the properties of Pyramid scores, and their correlation with other evaluation methods.
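To make the scoring idea in the abstract concrete, here is a minimal, hypothetical Python sketch of pyramid-style scoring. It assumes the common formulation in which a semantic content unit (SCU) is weighted by the number of human model summaries that express it, and a peer summary's score is its total SCU weight normalized by the best weight achievable with the same number of SCUs. The function and variable names are illustrative, not taken from the paper or any released tool.

```python
from collections import Counter


def pyramid_score(peer_scus, model_summaries):
    """Sketch of a pyramid-style score.

    peer_scus       -- set of SCU identifiers annotated in the peer summary
    model_summaries -- list of sets, the SCUs annotated in each human model
    """
    # An SCU's weight is the number of model summaries that express it
    # (the tier of the pyramid it occupies).
    weights = Counter()
    for scus in model_summaries:
        weights.update(scus)

    # Observed weight: sum of the weights of SCUs the peer expresses.
    observed = sum(weights.get(scu, 0) for scu in peer_scus)

    # Ideal weight: the best a summary containing the same number of SCUs
    # could achieve, i.e. the sum of the top-|peer_scus| weights.
    top = sorted(weights.values(), reverse=True)[: len(peer_scus)]
    ideal = sum(top)

    return observed / ideal if ideal else 0.0


# Toy example: four model summaries and a peer expressing three SCUs,
# one of which ("E") appears in no model summary.
models = [{"A", "B", "C"}, {"A", "B"}, {"A", "C", "D"}, {"A", "B", "D"}]
peer = {"A", "B", "E"}
print(round(pyramid_score(peer, models), 3))  # 0.778 = (4 + 3 + 0) / (4 + 3 + 2)
```

In this toy run, "A" sits in the top tier (weight 4), "B" below it (weight 3), and the unsupported SCU "E" contributes nothing, so equally sized summaries drawing on different high-weight SCUs can still score comparably, which is the behavior the abstract describes.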

Original language: English (US)
Article number: 1233913
Journal: ACM Transactions on Speech and Language Processing
Volume: 4
Issue number: 2
DOIs
State: Published - May 1 2007

All Science Journal Classification (ASJC) codes

  • Computer Science (miscellaneous)
  • Computational Mathematics
