Effects of the use of percentage agreement on behavioral observation reliabilities: A reassessment

Hoi K. Suen, Patrick S.C. Lee

Research output: Contribution to journal › Article › peer-review

23 Scopus citations

Abstract

The percentage agreement index has been, and continues to be, a popular measure of interobserver reliability in applied behavior analysis and child development, as well as in other fields in which behavioral observation techniques are used. An algebraic method and a linear programming method were used to assess chance-corrected reliabilities for a sample of past observations in which the percentage agreement index was used. The results indicated that, had kappa been used instead of percentage agreement, between one-fourth and three-fourths of the reported observations could be judged as unreliable against a lenient criterion, and between one-half and three-fourths could be judged as unreliable against a more stringent criterion. It is suggested that the continued use of the percentage agreement index has seriously undermined the reliabilities of past observations and can no longer be justified in future studies.
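The contrast the abstract draws can be illustrated numerically. The sketch below (hypothetical data, not taken from the article) shows how percentage agreement can look high purely because the target behavior is frequent, while Cohen's kappa, which subtracts the agreement expected by chance, can be near zero or even negative for the same records:

```python
def percentage_agreement(a, b):
    """Proportion of observation intervals on which two observers agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is the agreement expected
    by chance given each observer's marginal category frequencies."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    categories = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical interval records: 1 = behavior occurred, 0 = did not.
# The behavior is frequent, so chance agreement alone is high.
obs1 = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
obs2 = [1, 1, 1, 1, 1, 1, 1, 0, 1, 1]

print(percentage_agreement(obs1, obs2))  # 0.8 -- looks acceptable
print(cohens_kappa(obs1, obs2))          # negative -- worse than chance
```

Here 80% agreement would pass many conventional cutoffs, yet kappa is below zero because nearly all of that agreement is expected by chance. This is the kind of discrepancy the authors' reassessment of past studies documents.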

Original language: English (US)
Pages (from-to): 221-234
Number of pages: 14
Journal: Journal of Psychopathology and Behavioral Assessment
Volume: 7
Issue number: 3
State: Published - Sep 1985

All Science Journal Classification (ASJC) codes

  • Clinical Psychology
