TY - GEN
T1 - Robust video-frame classification for bronchoscopy
AU - McTaggart, Matthew I.
AU - Higgins, William E.
N1 - Publisher Copyright:
© 2019 SPIE.
PY - 2019
Y1 - 2019
N2 - During bronchoscopy, a physician uses the endobronchial video to help navigate and observe the inner airways of a patient's lungs for lung cancer assessment. After the procedure is completed, the video typically contains a significant number of uninformative frames. A video frame is uninformative when it is too dark, too blurry, or indistinguishable due to a build-up of mucus, blood, or water within the airways. We develop a robust and automatic system, consisting of two distinct approaches, to classify each frame in an endobronchial video sequence as informative or uninformative. Our first approach, referred to as the Classifier Approach, focuses on using image-processing techniques and a support vector machine, while our second approach, the Deep-Learning Approach, draws upon a convolutional neural network for video frame classification. Using the Classifier Approach, we achieved an accuracy of 78.8%, a sensitivity of 93.9%, and a specificity of 62.8%. The Deep-Learning Approach gave improved performance, with an accuracy of 87.3%, a sensitivity of 87.1%, and a specificity of 87.6%.
UR - http://www.scopus.com/inward/record.url?scp=85068900586&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85068900586&partnerID=8YFLogxK
U2 - 10.1117/12.2507290
DO - 10.1117/12.2507290
M3 - Conference contribution
AN - SCOPUS:85068900586
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2019
A2 - Fei, Baowei
A2 - Linte, Cristian A.
PB - SPIE
T2 - Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling
Y2 - 17 February 2019 through 19 February 2019
ER -