Abstract
The fundamental assumption underlying the use of 360-degree assessments is that ratings from different sources provide unique and meaningful information about the target manager's performance. Extant research appears to support this assumption by demonstrating low correlations between rating sources. This article reexamines the support for this assumption, suggesting that past research has been distorted by a statistical artifact: restriction of variance in job performance. This artifact reduces the between-target variance in ratings and attenuates traditional correlation-based estimates of rating similarity. Results from a Monte Carlo simulation and two field studies support this restriction-of-variance hypothesis. Noncorrelation-based methods of assessing interrater agreement indicated that agreement between sources was about as high as agreement within sources; thus, different sources did not appear to furnish substantially unique information. The authors conclude by questioning common practices in 360-degree assessments and offering suggestions for future research and application.
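The attenuation mechanism the abstract describes can be illustrated with a minimal Monte Carlo sketch. The code below is not from the article; it assumes a simple model in which two rating sources observe the same true performance with independent rater error, and the function name `simulate_between_source_r` and the numeric values are illustrative choices. It shows that shrinking the between-target variance in true performance lowers the observed between-source correlation even though neither source carries more unique information than before.

```python
import numpy as np

# Illustrative sketch (not the article's simulation): two rating "sources"
# observe the same true performance with independent rater error. Restricting
# between-target variance in true performance attenuates the between-source
# correlation even though each source reflects the same underlying construct.

rng = np.random.default_rng(0)

def simulate_between_source_r(true_sd, error_sd=1.0, n_targets=10_000):
    """Correlation between two sources rating the same set of targets."""
    true_perf = rng.normal(0.0, true_sd, n_targets)               # between-target variance
    source_a = true_perf + rng.normal(0.0, error_sd, n_targets)   # e.g., self ratings
    source_b = true_perf + rng.normal(0.0, error_sd, n_targets)   # e.g., peer ratings
    return np.corrcoef(source_a, source_b)[0, 1]

# Unrestricted vs. range-restricted true performance (values are arbitrary).
print("wide performance range:  r =", round(simulate_between_source_r(true_sd=1.0), 2))
print("restricted performance:  r =", round(simulate_between_source_r(true_sd=0.3), 2))
```

Under these assumptions, the restricted condition yields a markedly lower correlation, which, if taken at face value, could be misread as evidence that the two sources provide unique information; this is the artifact the article attributes to restriction of variance.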
| Original language | English (US) |
|---|---|
| Pages (from-to) | 80-128 |
| Number of pages | 49 |
| Journal | Organizational Research Methods |
| Volume | 6 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2003 |
All Science Journal Classification (ASJC) codes
- General Decision Sciences
- Strategy and Management
- Management of Technology and Innovation