Abstract
Background: The Fundamentals of Laparoscopic Surgery (FLS) program uses box trainers to develop laparoscopic skills. However, these simulators lack personalized training and real-time objective assessment, and they primarily represent adult anatomy, neglecting pediatric cases. Addressing these limitations requires advanced objective evaluations, such as motion analysis and eye-tracking, to track trainees' progress and provide real-time formative feedback. However, dynamic training environments complicate eye-tracking data extraction because areas of interest (AOIs) shift over time. This study aimed to extract AOI-dependent and motion metrics for differentiating and predicting trainees' skill levels across different box trainer anatomies.
Method: Medical students and residents performed the peg transfer task on adult and pediatric box trainers. Computer vision-deep learning (CV-DL) algorithms were integrated with eye-tracking data to automatically detect AOIs and extract AOI-dependent metrics (fixation rates on objects and tools) and a motion metric (tool speed). K-means clustering was used to differentiate trainees' skill levels. To predict trainees' visual behavior, we evaluated several machine learning (ML) techniques, including Random Forest, Support Vector Machines, Artificial Neural Networks, and Decision Trees, to determine which predicted trainees' visual attention patterns most accurately.
Results: The extracted metrics successfully classified novices into High and Mid-Low skill levels, with significant differences in all extracted metrics between visual behavior levels (p < 0.05). Random Forest achieved the highest accuracy for visual behavior prediction; Gini importance identified fixation rates on objects and tool speed as the key predictors. Novices' visual attention was consistent between pediatric and adult box trainers (p > 0.05).
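The clustering step described above can be sketched as follows. This is a minimal illustration only: the metric values, group sizes, and deterministic initialization are assumptions for demonstration, not the study's actual data or implementation.

```python
import numpy as np

# Illustrative K-means step: cluster trainees into two visual-behavior
# levels (High vs. Mid-Low) from extracted metrics. All values below are
# synthetic assumptions, not study data.
# Columns: fixation rate on objects, fixation rate on tools, tool speed.
rng = np.random.default_rng(0)
high = rng.normal([0.8, 0.5, 2.0], 0.05, size=(10, 3))     # hypothetical High group
mid_low = rng.normal([0.4, 0.3, 1.0], 0.05, size=(10, 3))  # hypothetical Mid-Low group
X = np.vstack([high, mid_low])

def kmeans(X, k=2, iters=50):
    """Plain Lloyd's algorithm with a deterministic initialization."""
    centers = X[[0, len(X) - 1]][:k]  # first and last rows as initial centers
    for _ in range(iters):
        # assign each trainee to the nearest center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X)
```

With well-separated metric profiles like these, the two recovered clusters align with the two simulated skill groups.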
Conclusion: These findings indicate that novices' skill levels may differ even in early-stage training, and that the extracted metrics have the potential to classify and predict novices' skill levels and visual behavior. This is important for customizing and adapting training programs to enhance trainee performance.
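The Gini-importance ranking reported in the Results can be illustrated with a one-level sketch: the impurity decrease of the best single-threshold split per feature, which is the quantity a Random Forest averages over many trees and nodes. The feature names and data below are synthetic assumptions, not the study's measurements.

```python
import numpy as np

# Synthetic example: one informative feature and one noise feature.
rng = np.random.default_rng(1)
y = np.array([0] * 10 + [1] * 10)                # 0 = Mid-Low, 1 = High (hypothetical labels)
fix_rate_objects = y + rng.normal(0, 0.1, 20)    # informative: tracks skill level here
other_metric = rng.normal(1.5, 0.3, 20)          # uninformative noise here
X = np.column_stack([fix_rate_objects, other_metric])

def gini(y):
    """Gini impurity of a binary label vector."""
    p = np.bincount(y, minlength=2) / len(y)
    return 1.0 - float((p ** 2).sum())

def stump_gini_gains(X, y):
    """Best impurity decrease achievable per feature with one threshold."""
    base = gini(y)
    gains = []
    for j in range(X.shape[1]):
        best = 0.0
        for t in np.unique(X[:, j])[:-1]:  # drop max value so no split is empty
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            best = max(best, base - weighted)
        gains.append(best)
    return np.array(gains)

gains = stump_gini_gains(X, y)  # the informative feature should score higher
```

A feature that cleanly separates the two skill groups yields a large impurity decrease, while a noise feature yields little, mirroring why fixation rate on objects and tool speed surfaced as key predictors.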
| Original language | English (US) |
|---|---|
| Journal | Surgical Endoscopy |
| DOIs | |
| State | Accepted/In press - 2025 |
All Science Journal Classification (ASJC) codes
- Surgery
Title: From gaze to proficiency: deep learning-driven prediction of novice performance in laparoscopic training using AOI-dependent metrics