Evaluation of anesthesia residents using mannequin-based simulation: A multiinstitutional study

Howard A. Schwid, G. Alec Rooke, Jan Carline, Randolph H. Steadman, W. Bosseau Murray, Michael Olympio, Stephen Tarver, Karen Steckner, Susan Wetstone, Sawan AlHaddad, Judith Hass, J. Victor Ryckman, John Tetzlaff, Julie Tome, Jeffrey L. Lane, Andrew Stasic, Susan Baldwin, Arthur J. L. Schneider, Clark Venable, Jody Henry, Philip R. Levin, Yue Ming Huang, Gregory Unruh, Rita Patel, William McIvor, Helene Finegold, Carole Cox, David S. Stern, Lindsey C. Henson, Ilya Shekhter, Brian K. Ross, Piotr Michalowski, Andrew Naklui-Cecchini, Sylvia Y. Dolinski, Margaret F. Brock, John A. Thomas, Ian Saunders, Kathleen Rosen, Elizabeth Sinz, John Barbaccia, William A. Kofke

Research output: Contribution to journal › Article › peer-review

162 Scopus citations


Background: Anesthesia simulators can generate reproducible, standardized clinical scenarios for instruction and evaluation purposes. Valid and reliable simulated scenarios and grading systems must be developed before simulation can be used to evaluate anesthesia residents.

Methods: After Human Subjects approval was obtained at each of the 10 participating institutions, 99 anesthesia residents consented to be videotaped during their management of four simulated scenarios on MedSim or METI mannequin-based anesthesia simulators. Using two different grading forms, two evaluators at each department independently reviewed the videotapes of the subjects from their institution to score the residents' performance. A third evaluator, at an outside institution, reviewed the videotapes again. Statistical analysis was performed for construct- and criterion-related validity, internal consistency, interrater reliability, and intersimulator reliability. A single evaluator reviewed all videotapes a fourth time to determine the frequency of certain management errors.

Results: Even advanced anesthesia residents nearing completion of their training made numerous management errors; however, construct-related validity of mannequin-based simulator assessment was supported by an overall improvement in simulator scores from the CB and CA-1 levels to the CA-2 and CA-3 levels of training. Subjects rated the simulator scenarios as realistic (3.47 out of a possible 4), further supporting construct-related validity. Criterion-related validity was supported by moderate correlations of simulator scores with departmental faculty evaluations (0.37-0.41, P < 0.01), ABA written in-training scores (0.44-0.49, P < 0.01), and departmental mock oral board scores (0.44-0.47, P < 0.01). Reliability of the simulator assessment was demonstrated by very good internal consistency (α = 0.71-0.76) and excellent interrater reliability (correlation = 0.94-0.96, P < 0.01; κ = 0.81-0.90). There was no significant difference between METI and MedSim scores for residents in the same year of training.

Conclusions: Numerous management errors were identified in this study of anesthesia residents from 10 institutions. Because advanced residents continued to make these errors, further attention to these problems may benefit residency training. Evaluation of anesthesia residents using mannequin-based simulators shows promise, adding a new dimension to current assessment methods. However, further improvements to the simulation scenarios and grading criteria are necessary before mannequin-based simulation is used for accreditation purposes.
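The reliability figures quoted in the Results (internal consistency as Cronbach's α, interrater agreement as Cohen's κ) follow standard psychometric formulas. As a minimal illustration only — not the study's actual analysis code, and with function names and toy data that are purely hypothetical — these statistics can be computed as:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of subjects' total scores
    return k / (k - 1) * (1 - item_var / total_var)

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical ratings of the same subjects:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)                    # raw agreement rate
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Toy example (illustrative numbers, not study data):
# four residents scored on three checklist items by one evaluator,
# and two evaluators' pass/fail ratings of the same four residents.
item_scores = [[2, 3, 2], [3, 3, 3], [1, 2, 1], [2, 2, 2]]
print(round(cronbach_alpha(item_scores), 2))
print(round(cohen_kappa([1, 1, 0, 1], [1, 1, 0, 0]), 2))
```

Here κ corrects the raw agreement rate for agreement expected by chance, which is why the abstract reports it alongside the plain interrater correlation.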

Original language: English (US)
Pages (from-to): 1434-1444
Number of pages: 11
Issue number: 6
State: Published - Dec 1 2002

All Science Journal Classification (ASJC) codes

  • Anesthesiology and Pain Medicine


