TY - GEN
T1 - Morphable Convolutional Neural Network for Biomedical Image Segmentation
AU - Jiang, Huaipan
AU - Sarma, Anup
AU - Fan, Mengran
AU - Ryoo, Jihyun
AU - Arunachalam, Meenakshi
AU - Naveen, Sharada
AU - Kandemir, Mahmut T.
N1 - Funding Information:
This work is supported in part by NSF grants 1629915, 1629129, 1763681 and 2008398 as well as a grant from Intel.
Publisher Copyright:
© 2021 EDAA.
PY - 2021/2/1
Y1 - 2021/2/1
N2 - We propose a morphable convolution framework that can be applied to irregularly shaped regions of an input feature map. This framework reduces the computational footprint of a regular CNN operation in the context of biomedical semantic image segmentation. The traditional CNN-based approach has high accuracy, but suffers from high training and inference computation costs compared to a conventional edge-detection-based approach. In this work, we combine the concept of morphable convolution with edge detection algorithms, resulting in a hierarchical framework that first detects the edges and then generates a layer-wise annotation map. The annotation map guides the convolution operation to run only on a small, useful fraction of pixels in the feature map. We evaluate our framework on three cell tracking datasets, and the experimental results indicate that our framework saves 30% and 10% execution time on CPU and GPU, respectively, without loss of accuracy, compared to baseline conventional CNN approaches.
AB - We propose a morphable convolution framework that can be applied to irregularly shaped regions of an input feature map. This framework reduces the computational footprint of a regular CNN operation in the context of biomedical semantic image segmentation. The traditional CNN-based approach has high accuracy, but suffers from high training and inference computation costs compared to a conventional edge-detection-based approach. In this work, we combine the concept of morphable convolution with edge detection algorithms, resulting in a hierarchical framework that first detects the edges and then generates a layer-wise annotation map. The annotation map guides the convolution operation to run only on a small, useful fraction of pixels in the feature map. We evaluate our framework on three cell tracking datasets, and the experimental results indicate that our framework saves 30% and 10% execution time on CPU and GPU, respectively, without loss of accuracy, compared to baseline conventional CNN approaches.
UR - http://www.scopus.com/inward/record.url?scp=85111001060&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85111001060&partnerID=8YFLogxK
U2 - 10.23919/DATE51398.2021.9474153
DO - 10.23919/DATE51398.2021.9474153
M3 - Conference contribution
AN - SCOPUS:85111001060
T3 - Proceedings - Design, Automation and Test in Europe, DATE
SP - 1522
EP - 1525
BT - Proceedings of the 2021 Design, Automation and Test in Europe, DATE 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 Design, Automation and Test in Europe Conference and Exhibition, DATE 2021
Y2 - 1 February 2021 through 5 February 2021
ER -