Peer grading in a MOOC: Reliability, validity, and perceived effects

Heng Luo, Anthony C. Robinson, Jae Young Park

Research output: Contribution to journal › Article › peer-review

108 Scopus citations

Abstract

Peer grading offers a scalable and sustainable way of providing assessment and feedback to a massive student population. However, there is currently little empirical evidence to support the credentials of peer grading as a learning assessment method in the MOOC context. To address this research need, this study examined 1,825 peer grading assignments collected from a Coursera MOOC in order to investigate the reliability and validity of peer grading, as well as its perceived effects on students' MOOC learning experience. The empirical findings provide evidence that the aggregate of student graders can produce peer grading scores that are fairly consistent and highly similar to instructor grading scores. Student survey responses also indicate that peer grading activities were well received by a majority of MOOC students, who believed they were fair, useful, and beneficial, and who would recommend including them in future MOOC offerings. Based on the empirical results, this study concludes with a set of principles for designing and implementing peer grading activities in the MOOC context.
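The analysis described above rests on two quantitative steps: aggregating several peer grades for each assignment into a single score, and then measuring how closely the aggregated scores track the instructor's scores. A minimal sketch of this comparison is shown below. It is not the authors' code, and the grade data are entirely made up for illustration; mean aggregation and Pearson correlation are common choices for such validity checks, but the study may have used other statistics.

```python
# Illustrative sketch (assumed methods, fabricated data): comparing
# aggregated peer grades against instructor grades for a set of assignments.
from statistics import mean


def aggregate_peer_score(peer_scores):
    """Aggregate several peer grades for one assignment by taking the mean."""
    return mean(peer_scores)


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5


# Hypothetical data: each assignment graded by three peers and the instructor.
peer_grades = [[8, 9, 7], [5, 6, 6], [9, 10, 9], [4, 5, 3], [7, 7, 8]]
instructor_grades = [8, 6, 9, 4, 7]

aggregated = [aggregate_peer_score(g) for g in peer_grades]
print(f"Correlation with instructor grades: {pearson_r(aggregated, instructor_grades):.3f}")
```

A correlation near 1 in such a comparison would indicate that aggregated peer scores closely mirror instructor scores, which is the kind of validity evidence the abstract reports.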

Original language: English (US)
Journal: Journal of Asynchronous Learning Network
Volume: 18
Issue number: 2
DOIs
State: Published - 2014

All Science Journal Classification (ASJC) codes

  • Education
  • Computer Networks and Communications
