TY - JOUR
T1 - Synthetic CT Generation of the Pelvis in Patients with Cervical Cancer
T2 - A Single Input Approach Using Generative Adversarial Network
AU - Baydoun, Atallah
AU - Xu, Ke
AU - Heo, Jin Uk
AU - Yang, Huan
AU - Zhou, Feifei
AU - Bethell, Latoya A.
AU - Fredman, Elisha T.
AU - Ellis, Rodney J.
AU - Podder, Tarun K.
AU - Traughber, Melanie S.
AU - Paspulati, Raj M.
AU - Qian, Pengjiang
AU - Traughber, Bryan J.
AU - Muzic, Raymond F.
N1 - Funding Information:
This work was supported in part by the National Cancer Institute of the National Institutes of Health, USA, under Award R01CA196687, and in part by the YES Award through the Department of Radiology, School of Medicine, Case Western Reserve University, Cleveland, OH, USA, under Award R25CA221718.
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Multi-modality imaging constitutes a foundation of precision medicine, especially in oncology, where reliable and rapid imaging techniques are needed to ensure adequate diagnosis and treatment. In cervical cancer, precision oncology requires the acquisition of 18F-labelled 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET), magnetic resonance (MR), and computed tomography (CT) images. Thereafter, the images are co-registered to derive the electron density attributes required for FDG-PET attenuation correction and radiation therapy planning. Nevertheless, this traditional approach is subject to MR-CT registration defects, raises treatment costs, and increases the patient's radiation exposure. To overcome these disadvantages, we propose a new framework for cross-modality image synthesis, which we apply to MR-to-CT image translation for cervical cancer diagnosis and treatment. The framework is based on a conditional generative adversarial network (cGAN) and illustrates a novel tactic that addresses, simply but effectively, the trade-off between vanishing gradients and feature extraction in deep learning. Its contributions are summarized as follows: 1) the approach, termed sU-cGAN, uses, for the first time, a shallow U-Net (sU-Net) with an encoder/decoder depth of 2 as the generator; 2) sU-cGAN's input is the same MR sequence that is used for radiological diagnosis, i.e., T2-weighted, turbo spin echo single-shot (TSE-SSH) MR images; 3) despite limited training data and a single-input-channel approach, sU-cGAN outperforms other state-of-the-art deep learning methods and enables accurate synthetic CT (sCT) generation. In conclusion, the suggested framework should be studied further in clinical settings. Moreover, the sU-Net model is worth exploring in other computer vision tasks.
AB - Multi-modality imaging constitutes a foundation of precision medicine, especially in oncology, where reliable and rapid imaging techniques are needed to ensure adequate diagnosis and treatment. In cervical cancer, precision oncology requires the acquisition of 18F-labelled 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET), magnetic resonance (MR), and computed tomography (CT) images. Thereafter, the images are co-registered to derive the electron density attributes required for FDG-PET attenuation correction and radiation therapy planning. Nevertheless, this traditional approach is subject to MR-CT registration defects, raises treatment costs, and increases the patient's radiation exposure. To overcome these disadvantages, we propose a new framework for cross-modality image synthesis, which we apply to MR-to-CT image translation for cervical cancer diagnosis and treatment. The framework is based on a conditional generative adversarial network (cGAN) and illustrates a novel tactic that addresses, simply but effectively, the trade-off between vanishing gradients and feature extraction in deep learning. Its contributions are summarized as follows: 1) the approach, termed sU-cGAN, uses, for the first time, a shallow U-Net (sU-Net) with an encoder/decoder depth of 2 as the generator; 2) sU-cGAN's input is the same MR sequence that is used for radiological diagnosis, i.e., T2-weighted, turbo spin echo single-shot (TSE-SSH) MR images; 3) despite limited training data and a single-input-channel approach, sU-cGAN outperforms other state-of-the-art deep learning methods and enables accurate synthetic CT (sCT) generation. In conclusion, the suggested framework should be studied further in clinical settings. Moreover, the sU-Net model is worth exploring in other computer vision tasks.
UR - http://www.scopus.com/inward/record.url?scp=85099546726&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099546726&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2021.3049781
DO - 10.1109/ACCESS.2021.3049781
M3 - Article
AN - SCOPUS:85099546726
SN - 2169-3536
VL - 9
SP - 17208
EP - 17221
JO - IEEE Access
JF - IEEE Access
M1 - 9316666
ER -