TY - JOUR
T1 - Peer assessment in the digital age
T2 - a meta-analysis comparing peer and teacher ratings
AU - Li, Hongli
AU - Xiong, Yao
AU - Zang, Xiaojiao
AU - Kornhaber, Mindy L.
AU - Lyu, Youngsun
AU - Chung, Kyung Sun
AU - Suen, Hoi K.
PY - 2016/2/17
AB - Given the wide use of peer assessment, especially in higher education, the relative accuracy of peer ratings compared to teacher ratings is a major concern for both educators and researchers. This concern has grown with the increasing use of peer assessment on digital platforms. In this meta-analysis, using a variance-known hierarchical linear modelling approach, we synthesise findings from studies on peer assessment since 1999, when computer-assisted peer assessment started to proliferate. The estimated average Pearson correlation between peer and teacher ratings is found to be .63, which is moderately strong. This correlation is significantly higher when: (a) the peer assessment is paper-based rather than computer-assisted; (b) the subject area is not medical/clinical; (c) the course is graduate level rather than undergraduate or K-12; (d) individual work instead of group work is assessed; (e) the assessors and assessees are matched at random; (f) the peer assessment is voluntary instead of compulsory; (g) the peer assessment is non-anonymous; (h) peer raters provide both scores and qualitative comments instead of only scores; and (i) peer raters are involved in developing the rating criteria. The findings are expected to inform practitioners regarding peer assessment practices that are more likely to exhibit better agreement with teacher assessment.
UR - http://www.scopus.com/inward/record.url?scp=84955200360&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84955200360&partnerID=8YFLogxK
DO - 10.1080/02602938.2014.999746
M3 - Article
AN - SCOPUS:84955200360
SN - 0260-2938
VL - 41
SP - 245
EP - 264
JO - Assessment & Evaluation in Higher Education
JF - Assessment & Evaluation in Higher Education
IS - 2
ER -