Deep convolutional neural networks (CNNs) provide the sensing and detection community with a discriminative, machine-learning-based approach to classifying images of objects. One of the main limitations of deep CNN image classifiers, however, is the need for extensive training data covering the varied appearances of each object class. While current methods such as GAN-based data augmentation, noise perturbation, and image rotation or translation can help CNNs associate convolved features with those of a learned image class, most fail to provide new contextual ground-truth information for each object class. To expand the association of new convolved-feature examples with image classes within CNN training datasets, we propose a feature-learning and training-data enhancement paradigm built on a multi-sensor-domain data augmentation algorithm. The algorithm uses a mutual-information, merit-based feature selection subroutine to iteratively select the SAR object features that correlate most strongly with each sensor domain's class image objects. It then re-augments these features into the opposite sensor domain's feature set via a highest-mutual-information, cross-sensor-domain image concatenation function. This augmented set is then used to retrain the CNN to recognize new cross-domain class object features to which each sensor domain's network was not previously exposed. In our experiments using T60-class vs. T70-class SAR object images from the MSTAR and MGTD dataset repositories, classification accuracy increased from 88% (MSTAR) and 61% (MGTD) to 93.75% after training on the augmented, fused cross-domain dataset.
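The core idea of the augmentation step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each domain's objects are represented as feature vectors sharing one label set, estimates mutual information by histogram binning, and concatenates each domain's top-scoring features onto the other domain's feature set. The function names (`mutual_information`, `select_top_features`, `cross_domain_augment`) and the binning scheme are hypothetical choices for the sketch.

```python
import numpy as np

def mutual_information(feature, labels, bins=8):
    # Estimate MI (in nats) between a binned scalar feature and class labels
    # via a joint histogram; a simple stand-in for the paper's MI criterion.
    f = np.digitize(feature, np.histogram_bin_edges(feature, bins=bins)[1:-1])
    classes = {c: i for i, c in enumerate(np.unique(labels))}
    joint = np.zeros((bins, len(classes)))
    for fv, lv in zip(f, labels):
        joint[fv, classes[lv]] += 1
    joint /= joint.sum()                       # joint distribution p(f, y)
    pf = joint.sum(axis=1, keepdims=True)      # marginal p(f)
    py = joint.sum(axis=0, keepdims=True)      # marginal p(y)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pf @ py)[nz])).sum())

def select_top_features(X, y, k):
    # Rank feature columns by MI with the labels; return the top-k indices.
    scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]

def cross_domain_augment(X_a, X_b, y, k):
    # Concatenate each domain's k highest-MI features onto the other
    # domain's feature set, mirroring the cross-domain augmentation idea.
    top_a = select_top_features(X_a, y, k)
    top_b = select_top_features(X_b, y, k)
    X_a_aug = np.hstack([X_a, X_b[:, top_b]])
    X_b_aug = np.hstack([X_b, X_a[:, top_a]])
    return X_a_aug, X_b_aug
```

Each augmented matrix would then be fed back into that domain's CNN training loop; in practice the paper operates on image features rather than generic tabular columns, so this sketch only conveys the selection-and-concatenation structure.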