Abstract
Purpose: We examined the performance of artificial intelligence chatbots on the PREview Practice Exam, an online situational judgment test of professionalism and ethics. Methods: We used validated methodologies to calculate scores, and descriptive statistics, χ² tests, and Fisher's exact tests to compare scores by model and competency. Results: GPT-3.5 and GPT-4 scored 6/9 (76th percentile) and 7/9 (92nd percentile), respectively, higher than the medical school applicant average of 5/9 (56th percentile). Both models answered more than 95% of questions correctly. Conclusions: Chatbots outperformed the average applicant on PREview, suggesting their potential for healthcare training and decision-making and highlighting risks to online assessment delivery.
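The Methods describe comparing scores by model and competency with χ² tests and Fisher's exact tests. As a minimal sketch only, the snippet below shows how such a comparison could be run in Python with SciPy on a hypothetical 2×2 contingency table; the counts are invented for illustration and are not the study's data.

```python
# Hypothetical example (not the study's data): compare the proportion of
# acceptable responses between two chatbot models using a chi-square test
# and Fisher's exact test, as named in the Methods.
from scipy.stats import chi2_contingency, fisher_exact

# Illustrative 2x2 contingency table:
# rows = model (e.g., GPT-3.5, GPT-4), columns = acceptable / not acceptable.
table = [[28, 2],
         [29, 1]]

chi2, chi2_p, dof, expected = chi2_contingency(table)
odds_ratio, fisher_p = fisher_exact(table)

print(f"Chi-square: statistic={chi2:.3f}, p={chi2_p:.3f}")
print(f"Fisher's exact: odds ratio={odds_ratio:.3f}, p={fisher_p:.3f}")
```

With the small cell counts typical of a nine-item exam, Fisher's exact test is generally the more appropriate of the two, which is presumably why both are reported.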
Original language | English (US) |
---|---|
Pages (from-to) | 331-333 |
Number of pages | 3 |
Journal | Medical Science Educator |
Volume | 34 |
Issue number | 2 |
DOIs | |
State | Published - Apr 2024 |
All Science Journal Classification (ASJC) codes
- Medicine (miscellaneous)
- Education