TY - GEN
T1 - In vivo demonstration of reflection artifact reduction in LED-based photoacoustic imaging using deep learning
AU - Agrawal, Sumit
AU - Johnstonbaugh, Kerrick
AU - Suresh, Thaarakh
AU - Garikipati, Ankit
AU - Kuniyil Ajith Singh, Mithun
AU - Karri, Sri Phani Krishna
AU - Kothapalli, Sri Rajasekhar
N1 - Funding Information:
We acknowledge the funding for this research from the NIH-NIBIB R00EB017729-05 (SRK), R21EB030370-01 (SRK), Penn State Cancer Institute startup funds (SRK), Leighton Riess Graduate Fellowship (SA), and the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU.
Publisher Copyright:
© COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
PY - 2021
Y1 - 2021
N2 - Reflection of photoacoustic (PA) signals from strong acoustic heterogeneities in biological tissue leads to reflection artifacts (RAs) in B-mode PA images. In practice, RAs often clutter clinically acquired PA images, making them difficult to interpret in the presence of hypoechoic or anechoic biological structures. To remove PA artifacts, several researchers have exploited (1) the frequency content of time-series photoacoustic data to separate true signals from artifacts, and (2) the multi-wavelength response of photoacoustic targets, assuming that the spectral signature of each RA correlates well with that of its source signal. These approaches require extensive offline processing and sometimes fail to correctly identify artifacts in deep tissue. This study demonstrates the use of a deep neural network with the U-Net architecture to detect and reduce RAs in B-mode PA images. To train the proposed deep learning model for the RA reduction task, a program was designed to randomly generate anatomically realistic digital phantoms of human fingers that produce RAs when subjected to PA imaging. In silico PA imaging experiments on these digital finger phantoms, modeling both photon transport and acoustic wave propagation, yielded 1800 training samples. The trained network was tested on both PA images generated from digital phantoms and in vivo PA data acquired from human fingers with a hand-held LED-based PA imaging system. Our results suggest that robust reduction of RAs with a deep neural network is possible if the network is trained on sufficiently realistic simulated images.
AB - Reflection of photoacoustic (PA) signals from strong acoustic heterogeneities in biological tissue leads to reflection artifacts (RAs) in B-mode PA images. In practice, RAs often clutter clinically acquired PA images, making them difficult to interpret in the presence of hypoechoic or anechoic biological structures. To remove PA artifacts, several researchers have exploited (1) the frequency content of time-series photoacoustic data to separate true signals from artifacts, and (2) the multi-wavelength response of photoacoustic targets, assuming that the spectral signature of each RA correlates well with that of its source signal. These approaches require extensive offline processing and sometimes fail to correctly identify artifacts in deep tissue. This study demonstrates the use of a deep neural network with the U-Net architecture to detect and reduce RAs in B-mode PA images. To train the proposed deep learning model for the RA reduction task, a program was designed to randomly generate anatomically realistic digital phantoms of human fingers that produce RAs when subjected to PA imaging. In silico PA imaging experiments on these digital finger phantoms, modeling both photon transport and acoustic wave propagation, yielded 1800 training samples. The trained network was tested on both PA images generated from digital phantoms and in vivo PA data acquired from human fingers with a hand-held LED-based PA imaging system. Our results suggest that robust reduction of RAs with a deep neural network is possible if the network is trained on sufficiently realistic simulated images.
UR - http://www.scopus.com/inward/record.url?scp=85109068186&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85109068186&partnerID=8YFLogxK
U2 - 10.1117/12.2579082
DO - 10.1117/12.2579082
M3 - Conference contribution
AN - SCOPUS:85109068186
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Photons Plus Ultrasound: Imaging and Sensing 2021
A2 - Oraevsky, Alexander A.
A2 - Wang, Lihong V.
PB - SPIE
T2 - Photons Plus Ultrasound: Imaging and Sensing 2021
Y2 - 6 March 2021 through 11 March 2021
ER -