Direct observation assessment of milestones: Problems with reliability

Meghan Schott, Raashee Kedia, Susan B. Promes, Thomas Swoboda, Kevin O'Rourke, Walter Green, Rachel Liu, Brent Stansfield, Sally A. Santen

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

Introduction: Emergency medicine (EM) milestones are used to assess residents' progress. While some milestone validity evidence exists, there is a lack of standardized tools available to reliably assess residents. Inherent to this is a concern that we may not be truly measuring what we intend to assess. The purpose of this study was to design a direct observation milestone assessment instrument supported by validity and reliability evidence. In addition, such a tool would lend further validity evidence to the EM milestones by demonstrating their accurate measurement.

Methods: This was a multi-center, prospective, observational validity study conducted at eight institutions. The Critical Care Direct Observation Tool (CDOT) was created to assess EM residents during resuscitations. The tool was designed using a modified Delphi method focused on content, response process, and internal structure validity. Paying special attention to content validity, an expert panel developed the CDOT while retaining the wording of the EM milestones. We built response process and internal consistency evidence by piloting and revising the instrument. Raters were faculty who routinely assess residents on the milestones; all completed a brief training video on use of the instrument. Raters then used the CDOT to assess simulated videos of three residents at different stages of training in a critical care scenario. We measured reliability using Fleiss' kappa and intraclass correlations.

Results: Two versions of the CDOT were used: one used the milestone levels as global rating scales with anchors, and the second reflected the current trend toward a checklist response system. Although the raters who used the CDOT routinely rate residents in their own practice, they did not score the residents' performances in the videos comparably, which led to poor reliability. The Fleiss' kappa for each item on both versions of the CDOT was near zero.

Conclusion: The validity and reliability of current EM milestone assessment tools have yet to be determined. This study was a rigorous attempt to collect validity evidence in the development of a direct observation assessment instrument. Despite strict attention to validity evidence, however, inter-rater reliability was low. Potential sources of reducible variance include rater- and instrument-based error. Based on this study, there may be concerns about the reliability of other EM milestone assessment tools currently in use.
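As context for the reliability analysis, the study's headline statistic, Fleiss' kappa, can be computed from a subjects-by-categories table of rating counts. The following is a minimal illustrative sketch in Python (not the authors' analysis code; the example data are hypothetical):

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a list of per-subject category counts.

    table[i][j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters.
    """
    n_subjects = len(table)
    n_raters = sum(table[0])

    # Per-subject agreement: agreeing rater pairs out of all rater pairs.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ]
    p_bar = sum(p_i) / n_subjects

    # Chance agreement from the marginal category proportions.
    totals = [sum(col) for col in zip(*table)]
    p_j = [t / (n_subjects * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)


# Hypothetical example: three raters scoring four performances on a
# two-level scale. Perfect agreement yields kappa = 1; values near 0
# indicate agreement no better than chance, as the study reports for
# the CDOT items.
perfect = [[3, 0], [0, 3], [3, 0], [0, 3]]
print(fleiss_kappa(perfect))  # 1.0
```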

Original language: English (US)
Pages (from-to): 871-876
Number of pages: 6
Journal: Western Journal of Emergency Medicine
Volume: 16
Issue number: 6
DOIs
State: Published - Nov 2015

All Science Journal Classification (ASJC) codes

  • Emergency Medicine
