TY - GEN
T1 - Multi-Class Micro-CT Image Segmentation Using Sparse Regularized Deep Networks
AU - Yazdani, Amirsaeed
AU - Sun, Yung Chen
AU - Stephens, Nicholas B.
AU - Ryan, Timothy
AU - Monga, Vishal
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11/1
Y1 - 2020/11/1
AB - It is common in anthropology and paleontology to address questions about extant and extinct species through the quantification of osteological features observable in micro-computed tomographic (mCT) scans. In cases where remains were buried, the grey values present in these scans may be classified as belonging to air, dirt, or bone. While various intensity-based methods have been proposed to segment scans into these classes, the intensity values for dirt and bone are often nearly indistinguishable. In these instances, scientists resort to laborious manual segmentation, which does not scale well when a large number of scans must be analyzed. Here we present a new domain-enriched network for three-class image segmentation, which utilizes the domain knowledge of experts familiar with manually segmenting bone and dirt structures. More precisely, our novel structure consists of two components: 1) a representation network trained on special samples with newly designed custom loss terms, which extracts discriminative bone and dirt features, and 2) a segmentation network that leverages these extracted discriminative features. The two parts are jointly trained to optimize segmentation performance. A comparison of our network with current state-of-the-art U-Nets demonstrates the benefits of our proposal, particularly when the number of labeled training images is limited, which is invariably the case for mCT segmentation.
UR - http://www.scopus.com/inward/record.url?scp=85107706006&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85107706006&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF51394.2020.9443322
DO - 10.1109/IEEECONF51394.2020.9443322
M3 - Conference contribution
AN - SCOPUS:85107706006
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 1553
EP - 1557
BT - Conference Record of the 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020
Y2 - 1 November 2020 through 5 November 2020
ER -