Three-dimensional convolutional neural networks for target classification with volumetric sonar data

Research output: Contribution to journal › Conference article › peer-review



Efficient three-dimensional (3-d) convolutional neural networks (CNNs) are designed and trained for an underwater target classification task with volumetric synthetic aperture sonar (SAS) imagery. (The third dimension of the data represents depth into the sediment, thereby enabling the consideration of buried underwater objects.) The use of tiny networks containing relatively few parameters makes training with enormous input data volumes feasible even with modest computational power and limited computer memory. The promise of the approach is demonstrated for both buried and proud man-made objects present in real, measured SAS data cubes collected at aquatic sites by an experimental volumetric sonar system, called the Sediment Volume Search Sonar (SVSS). The classification performance of each 3-d CNN exhibits marked improvement over a prescreening detection algorithm alone, and the utility of an ensemble approach is also quantified. An analysis of the effective functionality of the learned networks is provided, accompanied by figures showing example trained filters as well as intermediate representations of a data volume containing unexploded ordnance (UXO). The predictions of the 3-d CNN classifiers can provide valuable guidance for the efficient allocation of resources during real-world UXO remediation operations.
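The core operation underlying such networks is a 3-D convolution, which slides a small volumetric filter through a data cube along all three axes (including the sediment-depth axis). The paper does not specify its architecture here, so the following is only a minimal NumPy sketch of a single-channel 3-D convolution with valid padding; the cube dimensions, axis ordering (depth × along-track × cross-track), and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Single-channel 3-D convolution with valid padding: the basic
    operation a 3-d CNN layer applies to a volumetric data cube.
    (Illustrative sketch; real networks use optimized library kernels.)"""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Elementwise product of the filter with the local
                # sub-volume, summed to a single output voxel.
                out[i, j, k] = np.sum(volume[i:i + d, j:j + h, k:k + w] * kernel)
    return out

# Hypothetical data cube: depth x along-track x cross-track.
cube = np.random.rand(16, 32, 32)
filt = np.random.rand(3, 3, 3)
feat = conv3d_valid(cube, filt)
print(feat.shape)  # (14, 30, 30)
```

A tiny network in the spirit described would stack a few such filtered volumes (with nonlinearities and pooling) before a small classification head, keeping the parameter count low even though each input cube is large.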

Original language: English (US)
Article number: 070005
Journal: Proceedings of Meetings on Acoustics
Issue number: 1
State: Published - Jun 20 2021
Event: 6th Underwater Acoustics Conference and Exhibition, UACE 2021 - Virtual, Online
Duration: Jun 20 2021 - Jun 25 2021

All Science Journal Classification (ASJC) codes

  • Acoustics and Ultrasonics
