Abstract
The percentage agreement index has been, and continues to be, a popular measure of interobserver reliability in applied behavior analysis and child development, as well as in other fields in which behavioral observation techniques are used. An algebraic method and a linear programming method were used to assess chance-corrected reliabilities for a sample of past observations in which the percentage agreement index was used. The results indicated that, had kappa been used instead of percentage agreement, between one-fourth and three-fourths of the reported observations could be judged as unreliable against a lenient criterion, and between one-half and three-fourths could be judged as unreliable against a more stringent criterion. It is suggested that the continued use of the percentage agreement index has seriously undermined the reliabilities of past observations and can no longer be justified in future studies.
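The contrast the abstract draws between percentage agreement and kappa can be illustrated with a small sketch. The interval records below are hypothetical (not data from the study); they show how two observers can agree on 80% of intervals yet yield a kappa near zero or below when both score the behavior at a high base rate, so that most agreement is expected by chance.

```python
def percent_agreement(a, b):
    # Proportion of intervals on which the two observers agree.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    # Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
    # where p_e is the agreement expected if the observers scored
    # intervals independently at their own base rates.
    n = len(a)
    p_o = percent_agreement(a, b)
    p_a = sum(a) / n  # observer A's rate of scoring an occurrence
    p_b = sum(b) / n  # observer B's rate of scoring an occurrence
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical interval-by-interval records (1 = occurrence, 0 = nonoccurrence).
obs_a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
obs_b = [1, 1, 1, 1, 1, 0, 1, 1, 1, 1]

print(percent_agreement(obs_a, obs_b))  # 0.8
print(cohens_kappa(obs_a, obs_b))       # ≈ -0.111
```

With both observers scoring the behavior in 9 of 10 intervals, chance agreement alone is p_e = 0.82, so an observed 80% agreement actually falls below chance, which is the kind of inflation the paper argues percentage agreement conceals.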
Original language | English (US) |
---|---|
Pages (from-to) | 221-234 |
Number of pages | 14 |
Journal | Journal of Psychopathology and Behavioral Assessment |
Volume | 7 |
Issue number | 3 |
DOIs | |
State | Published - Sep 1985 |
All Science Journal Classification (ASJC) codes
- Clinical Psychology