TY - JOUR
T1 - Uncertainty Quantification with Deep Learning through Variational Inference with applications to Synthetic Aperture Sonar
AU - Orescanin, Marko
AU - Harrington, Brian
AU - Olson, Derek
AU - Geilhufe, Marc
AU - Hansen, Roy E.
AU - Warakagoda, Narada
N1 - Publisher Copyright:
© 2023, International Association for Hydro-Environment Engineering and Research. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Deep learning (DL) has gained popularity within the active sonar community due to its ability to learn complex non-linear relationships between input features and labels through a data-driven approach. DL models have led to significant improvements in automatic target recognition (ATR) and seafloor texture understanding with synthetic aperture sonar (SAS). Most of the DL models reported in the literature are deterministic and do not provide uncertainty estimates for their predictions, which limits their utility for downstream tasks such as ATR and change detection. In this work, we demonstrate the ability to quantify uncertainty in deep learning predictions by utilizing Bayesian neural networks, in this case via variational inference. We introduce and compare several state-of-the-art Bayesian methods (including variational inference) on the task of classifying imaging artifacts in SAS. We conduct this on a novel dataset developed for this classification task by introducing physical perturbations in the image formation stage, namely: 1) a sound speed error of 40 m/s, 2) a navigation error through a yaw perturbation of 0.35°, and 3) Gaussian noise added to the imaging channels prior to pulse compression (lowering the average image SNR to 5 dB). Overall, we demonstrate that our best model, a mean-field variational inference ResNet architecture using Flipout, achieves 92% accuracy with calibrated uncertainty. By rejecting the 10% of the data with the highest uncertainty, we achieve an additional 4% improvement in accuracy.
AB - Deep learning (DL) has gained popularity within the active sonar community due to its ability to learn complex non-linear relationships between input features and labels through a data-driven approach. DL models have led to significant improvements in automatic target recognition (ATR) and seafloor texture understanding with synthetic aperture sonar (SAS). Most of the DL models reported in the literature are deterministic and do not provide uncertainty estimates for their predictions, which limits their utility for downstream tasks such as ATR and change detection. In this work, we demonstrate the ability to quantify uncertainty in deep learning predictions by utilizing Bayesian neural networks, in this case via variational inference. We introduce and compare several state-of-the-art Bayesian methods (including variational inference) on the task of classifying imaging artifacts in SAS. We conduct this on a novel dataset developed for this classification task by introducing physical perturbations in the image formation stage, namely: 1) a sound speed error of 40 m/s, 2) a navigation error through a yaw perturbation of 0.35°, and 3) Gaussian noise added to the imaging channels prior to pulse compression (lowering the average image SNR to 5 dB). Overall, we demonstrate that our best model, a mean-field variational inference ResNet architecture using Flipout, achieves 92% accuracy with calibrated uncertainty. By rejecting the 10% of the data with the highest uncertainty, we achieve an additional 4% improvement in accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85178362798&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85178362798&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85178362798
SN - 2408-0195
SP - 371
EP - 378
JO - Underwater Acoustic Conference and Exhibition Series
JF - Underwater Acoustic Conference and Exhibition Series
T2 - 7th Underwater Acoustics Conference and Exhibition, UACE 2023
Y2 - 25 June 2023 through 30 June 2023
ER -