TY - JOUR
T1 - Application of multidomain sensor image fusion and training data augmentation for enhanced CNN image classification
AU - Arnous, Ferris I.
AU - Narayanan, Ram M.
AU - Li, Bing C.
N1 - Publisher Copyright:
© 2022 SPIE and IS&T.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - Convolutional neural networks (CNNs) provide the sensing and detection community with a discriminative approach for classifying images. However, one of the largest limitations of deep CNN image classifiers is the need for extensive training datasets containing a variety of image representations. While current methods, such as generative adversarial network data augmentation and additions of noise, rotations, and translations, can allow CNNs to better associate new images and their feature representations with those of a learned image class, many fail to provide new contexts of ground-truth feature information. To expand the association of critical class features within CNN image training datasets, an image pairing and training dataset augmentation paradigm via a multi-sensor-domain image data fusion algorithm is proposed. This algorithm uses a mutual information (MI) and merit-based feature selection subroutine to pair highly correlated cross-domain images from multiple sensor-domain image datasets. It then re-augments the corresponding cross-domain image pairs into the opposite sensor domain's feature set via a highest-MI, cross-sensor-domain image concatenation function. This augmented image set then acts to retrain the CNN to recognize greater generalizations of image class features via cross-domain, mixed representations. Experimental results indicated an increased ability of CNNs to generalize and discriminate between image classes during testing of class images from synthetic aperture radar vehicle, solar cell device reliability screening, and lung cancer detection image datasets.
UR - http://www.scopus.com/inward/record.url?scp=85125697473&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85125697473&partnerID=8YFLogxK
U2 - 10.1117/1.JEI.31.1.013014
DO - 10.1117/1.JEI.31.1.013014
M3 - Article
AN - SCOPUS:85125697473
SN - 1017-9909
VL - 31
JO - Journal of Electronic Imaging
JF - Journal of Electronic Imaging
IS - 1
M1 - 013014
ER -