TY - GEN
T1 - A new 3D automatic segmentation framework for accurate extraction of prostate from diffusion imaging
AU - Firjani, A.
AU - Elnakib, A.
AU - Khalifa, F.
AU - Gimel'Farb, G.
AU - El-Ghar, M. Abo
AU - Elmaghraby, A.
AU - El-Baz, A.
PY - 2011
Y1 - 2011
AB - Prostate segmentation is an essential step in developing any non-invasive Computer-Assisted Diagnostic (CAD) system for the early diagnosis of prostate cancer using Magnetic Resonance Imaging (MRI). In this paper, a novel framework for 3D segmentation of the prostate region from Diffusion-Weighted Magnetic Resonance Imaging (DW-MRI) is proposed. The framework is based on a Maximum A Posteriori (MAP) estimate of a new log-likelihood function that consists of three descriptors: (i) 1st-order visual appearance descriptors of the Diffusion-MRI, (ii) a 3D spatially rotation-variant 2nd-order homogeneity descriptor, and (iii) a 3D prostate shape descriptor. The shape prior is learned from co-aligned, segmented 3D prostate Diffusion-MRI data. The visual appearances of the object and its background are described with marginal gray-level distributions obtained by separating their mixture over the prostate data. The spatial interactions between prostate voxels are modeled by a 3D 2nd-order rotation-variant Markov-Gibbs Random Field (MGRF) of object/background labels with analytically estimated potentials. Experiments with real in vivo prostate Diffusion-MRI data confirm the robustness and accuracy of the proposed approach.
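N1 - Editorial note: the abstract describes a voxel-wise MAP labeling that fuses three log-likelihood terms (appearance, spatial interaction, shape). The following is a minimal, hypothetical Python sketch of how such a combination could be composed; it is not the authors' implementation, and the function name, the Gaussian appearance model, and the ICM/Potts relaxation are assumptions that simplify the paper's 2nd-order MGRF with analytically estimated potentials.

  import numpy as np

  def map_segment(volume, shape_prior, obj_mean, obj_std, bg_mean, bg_std,
                  beta=1.0, n_icm_iters=5):
      # Hypothetical sketch only: Gaussian appearance terms, a voxel-wise
      # shape prior, and an ICM pass with a Potts-style interaction standing
      # in for the MGRF model described in the abstract.
      eps = 1e-12
      # (i) 1st-order appearance: log-likelihood of each voxel intensity
      # under the object and background gray-level models.
      ll_obj = -0.5 * ((volume - obj_mean) / obj_std) ** 2 - np.log(obj_std)
      ll_bg  = -0.5 * ((volume - bg_mean) / bg_std) ** 2 - np.log(bg_std)
      # (iii) shape descriptor: voxel-wise prior probability of prostate.
      ll_obj = ll_obj + np.log(shape_prior + eps)
      ll_bg  = ll_bg + np.log(1.0 - shape_prior + eps)
      # Initial MAP labels ignoring spatial interactions.
      labels = (ll_obj > ll_bg).astype(np.int8)
      # (ii) spatial interactions: iterated conditional modes rewarding
      # agreement with the 6-connected neighbourhood (np.roll wraps at the
      # volume borders in this simplified version).
      for _ in range(n_icm_iters):
          obj_neighbours = np.zeros(volume.shape, dtype=np.float64)
          for axis in range(3):
              for shift in (-1, 1):
                  obj_neighbours += np.roll(labels, shift, axis=axis)
          score_obj = ll_obj + beta * obj_neighbours
          score_bg  = ll_bg + beta * (6.0 - obj_neighbours)
          labels = (score_obj > score_bg).astype(np.int8)
      return labels

  Usage (all inputs hypothetical): labels = map_segment(dwi_volume, prior_map, obj_mean=200.0, obj_std=30.0, bg_mean=80.0, bg_std=25.0) returns a binary 3D prostate mask of the same shape as dwi_volume.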
UR - http://www.scopus.com/inward/record.url?scp=79959883402&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79959883402&partnerID=8YFLogxK
U2 - 10.1109/BSEC.2011.5872329
DO - 10.1109/BSEC.2011.5872329
M3 - Conference contribution
AN - SCOPUS:79959883402
SN - 9781612844107
T3 - Proceedings of the 2011 Biomedical Sciences and Engineering Conference: Image Informatics and Analytics in Biomedicine, BSEC 2011
BT - Proceedings of the 2011 Biomedical Sciences and Engineering Conference
T2 - 2011 Biomedical Sciences and Engineering Conference: Image Informatics and Analytics in Biomedicine, BSEC 2011
Y2 - 15 March 2011 through 17 March 2011
ER -