TY - JOUR
T1 - Direct observation assessment of milestones
T2 - Problems with reliability
AU - Schott, Meghan
AU - Kedia, Raashee
AU - Promes, Susan B.
AU - Swoboda, Thomas
AU - O'Rourke, Kevin
AU - Green, Walter
AU - Liu, Rachel
AU - Stansfield, Brent
AU - Santen, Sally A.
N1 - Publisher Copyright:
© 2015 by the article author(s).
PY - 2015/11
Y1 - 2015/11
N2 - Introduction: Emergency medicine (EM) milestones are used to assess residents' progress. While some milestone validity evidence exists, there is a lack of standardized tools available to reliably assess residents. Inherent to this is a concern that we may not be truly measuring what we intend to assess. The purpose of this study was to design a direct observation milestone assessment instrument supported by validity and reliability evidence. In addition, such a tool would further lend validity evidence to the EM milestones by demonstrating their accurate measurement. Methods: This was a multi-center, prospective, observational validity study conducted at eight institutions. The Critical Care Direct Observation Tool (CDOT) was created to assess EM residents during resuscitations. The tool was designed using a modified Delphi method focused on content, response process, and internal structure validity. Paying special attention to content validity, the CDOT was developed by an expert panel that maintained the EM milestone wording. We built response process and internal consistency evidence by piloting and revising the instrument. Raters were faculty who routinely assess residents on the milestones, and all completed a brief training video on use of the instrument. Raters used the CDOT to assess simulated videos of three residents at different stages of training in a critical care scenario. We measured reliability using Fleiss' kappa and intraclass correlations. Results: Two versions of the CDOT were used: one used the milestone levels as global rating scales with anchors, and the second reflected the current trend toward checklist response systems. Although the raters who used the CDOT routinely rate residents in their practice, they did not score the residents' performances in the videos comparably, which led to poor reliability. The Fleiss' kappa for each item on both versions of the CDOT was near zero. Conclusion: The validity and reliability of the current EM milestone assessment tools have yet to be determined. This study was a rigorous attempt to collect validity evidence in the development of a direct observation assessment instrument; however, despite strict attention to validity evidence, inter-rater reliability was low. Potential sources of reducible variance include rater- and instrument-based error. Based on this study, there may be concerns about the reliability of other EM milestone assessment tools currently in use.
AB - Introduction: Emergency medicine (EM) milestones are used to assess residents' progress. While some milestone validity evidence exists, there is a lack of standardized tools available to reliably assess residents. Inherent to this is a concern that we may not be truly measuring what we intend to assess. The purpose of this study was to design a direct observation milestone assessment instrument supported by validity and reliability evidence. In addition, such a tool would further lend validity evidence to the EM milestones by demonstrating their accurate measurement. Methods: This was a multi-center, prospective, observational validity study conducted at eight institutions. The Critical Care Direct Observation Tool (CDOT) was created to assess EM residents during resuscitations. The tool was designed using a modified Delphi method focused on content, response process, and internal structure validity. Paying special attention to content validity, the CDOT was developed by an expert panel that maintained the EM milestone wording. We built response process and internal consistency evidence by piloting and revising the instrument. Raters were faculty who routinely assess residents on the milestones, and all completed a brief training video on use of the instrument. Raters used the CDOT to assess simulated videos of three residents at different stages of training in a critical care scenario. We measured reliability using Fleiss' kappa and intraclass correlations. Results: Two versions of the CDOT were used: one used the milestone levels as global rating scales with anchors, and the second reflected the current trend toward checklist response systems. Although the raters who used the CDOT routinely rate residents in their practice, they did not score the residents' performances in the videos comparably, which led to poor reliability. The Fleiss' kappa for each item on both versions of the CDOT was near zero. Conclusion: The validity and reliability of the current EM milestone assessment tools have yet to be determined. This study was a rigorous attempt to collect validity evidence in the development of a direct observation assessment instrument; however, despite strict attention to validity evidence, inter-rater reliability was low. Potential sources of reducible variance include rater- and instrument-based error. Based on this study, there may be concerns about the reliability of other EM milestone assessment tools currently in use.
UR - http://www.scopus.com/inward/record.url?scp=84947066488&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84947066488&partnerID=8YFLogxK
U2 - 10.5811/westjem.2015.9.27270
DO - 10.5811/westjem.2015.9.27270
M3 - Article
C2 - 26594281
AN - SCOPUS:84947066488
SN - 1936-900X
VL - 16
SP - 871
EP - 876
JO - Western Journal of Emergency Medicine
JF - Western Journal of Emergency Medicine
IS - 6
ER -