The Restriction of Variance Hypothesis and Interrater Reliability and Agreement: Are Ratings from Multiple Sources Really Dissimilar?

James M. Lebreton, Jennifer R.D. Burgess, Robert B. Kaiser, E. Kate Atchley, Lawrence R. James

Research output: Contribution to journal › Article › peer-review

196 Scopus citations

Abstract

The fundamental assumption underlying the use of 360-degree assessments is that ratings from different sources provide unique and meaningful information about the target manager's performance. Extant research appears to support this assumption by demonstrating low correlations between rating sources. This article reexamines support for this assumption, suggesting that past research has been distorted by a statistical artifact: restriction of variance in job performance. This artifact reduces the amount of between-target variance in ratings and attenuates traditional correlation-based estimates of rating similarity. Results obtained from a Monte Carlo simulation and two field studies support this restriction of variance hypothesis. Noncorrelation-based methods of assessing interrater agreement indicated that agreement between sources was about as high as agreement within sources. Thus, different sources did not appear to be furnishing substantially unique information. The authors conclude by questioning common practices in 360-degree assessments and offering suggestions for future research and application.
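The attenuation mechanism the abstract describes can be illustrated with a small simulation. This is a hedged sketch, not the authors' actual Monte Carlo design: all distributions and parameters below are illustrative assumptions. Two raters observe the same true performance plus independent error; when the pool of targets is restricted to a narrow band of true performance, the interrater correlation collapses even though the absolute agreement between raters is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def rater_pair(true_scores, error_sd=1.0):
    """Simulate two raters who see the same true score plus independent error."""
    r1 = true_scores + rng.normal(0, error_sd, true_scores.size)
    r2 = true_scores + rng.normal(0, error_sd, true_scores.size)
    return r1, r2

# Full-variance population of target managers (illustrative SD of 2.0).
full = rng.normal(0, 2.0, n)
# Restricted population: only targets near the mean, mimicking restriction
# of variance in job performance (e.g., survivors of prior selection).
restricted = full[np.abs(full) < 0.5]

for label, true in [("full variance", full), ("restricted", restricted)]:
    r1, r2 = rater_pair(true)
    corr = np.corrcoef(r1, r2)[0, 1]          # correlation-based similarity
    agree = np.mean(np.abs(r1 - r2))          # absolute agreement per target
    print(f"{label:>15}: interrater r = {corr:.2f}, mean |diff| = {agree:.2f}")
```

The correlation-based estimate drops sharply under restriction while the mean absolute difference between raters stays the same, which is the pattern the restriction of variance hypothesis predicts: low between-source correlations need not imply low between-source agreement.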

Original language: English (US)
Pages (from-to): 80-128
Number of pages: 49
Journal: Organizational Research Methods
Volume: 6
Issue number: 1
DOIs
State: Published - Jan 2003

All Science Journal Classification (ASJC) codes

  • General Decision Sciences
  • Strategy and Management
  • Management of Technology and Innovation
