TY - JOUR
T1 - Accurate lungs segmentation on CT chest images by adaptive appearance-guided shape modeling
AU - Soliman, Ahmed
AU - Khalifa, Fahmi
AU - Elnakib, Ahmed
AU - El-Ghar, Mohamed Abou
AU - Dunlap, Neal
AU - Wang, Brian
AU - Gimel'farb, Georgy
AU - Keynton, Robert
AU - El-Baz, Ayman
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2017/1
Y1 - 2017/1
N2 - To accurately segment pathological and healthy lungs for reliable computer-aided disease diagnostics, a stack of chest CT scans is modeled as a sample of a spatially inhomogeneous joint 3D Markov-Gibbs random field (MGRF) of voxel-wise lung and chest CT image signals (intensities). The proposed learnable MGRF integrates two visual appearance submodels with an adaptive lung shape submodel. The first-order appearance submodel accounts for both the original CT image and its Gaussian scale space (GSS) filtered version to specify local and global signal properties, respectively. Each empirical marginal probability distribution of signals is closely approximated with a linear combination of discrete Gaussians (LCDG), containing two positive dominant and multiple sign-alternate subordinate DGs. The approximation is separated into two LCDGs to describe individually the lungs and their background, i.e., all other chest tissues. The second-order appearance submodel quantifies conditional pairwise intensity dependencies in the nearest voxel 26-neighborhood in both the original and GSS-filtered images. The shape submodel is built from a set of training data and is adapted during segmentation using both the lung and chest appearances. The accuracy of the proposed segmentation framework is quantitatively assessed using two public databases (ISBI VESSEL12 challenge and MICCAI LOLA11 challenge) and our own database with 20, 55, and 30 CT images, respectively, of various lung pathologies acquired with different scanners and protocols. Quantitative assessment of our framework in terms of Dice similarity coefficients, 95-percentile bidirectional Hausdorff distances, and percentage volume differences confirms the high accuracy of our model on both our database (98.4±1.0%, 2.2±1.0 mm, 0.42±0.10%) and the VESSEL12 database (99.0±0.5%, 2.1±1.6 mm, 0.39±0.20%). The accuracy of our approach is further verified via a blind evaluation by the organizers of the LOLA11 competition: our framework yielded an average overlap of 98.0% with the expert segmentation on all 55 subjects and was ranked first among all compared state-of-the-art techniques.
UR - http://www.scopus.com/inward/record.url?scp=85013374999&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85013374999&partnerID=8YFLogxK
U2 - 10.1109/TMI.2016.2606370
DO - 10.1109/TMI.2016.2606370
M3 - Article
C2 - 27705854
AN - SCOPUS:85013374999
SN - 0278-0062
VL - 36
SP - 263
EP - 276
JO - IEEE Transactions on Medical Imaging
JF - IEEE Transactions on Medical Imaging
IS - 1
ER -