Multi-view SAS image classification using deep learning

David P. Williams, Samantha Dugelay

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

A new approach is proposed for multi-view classification when sonar data is in the form of imagery and each object has been viewed an arbitrary number of times. An image-fusion technique is employed in conjunction with a deep learning algorithm (based on Boltzmann machines) so that the sonar data from multiple views can be combined and exploited at the (earliest) image level. The method utilizes single-view imagery and, whenever available, multi-view fused imagery in the same unified classification framework. The promise of the proposed approach is demonstrated in the context of an object classification task with real synthetic aperture sonar (SAS) imagery collected at sea.
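
The abstract describes fusing an arbitrary number of views at the image level and classifying both single-view and fused chips with one Boltzmann-machine-based deep model. As a rough illustration only (not the authors' implementation), the sketch below fuses co-registered view chips by pixel-wise averaging and feeds flattened chips to a stacked-RBM plus logistic-regression pipeline; the names views, X_train, and y_train are hypothetical placeholders.

    # Minimal sketch of the idea described above, not the authors' implementation:
    # fuse an arbitrary number of co-registered single-view SAS chips at the image
    # level, then classify flattened chips with a Boltzmann-machine-style model
    # (approximated here by stacked RBM feature layers + logistic regression).
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    def fuse_views(views):
        """Pixel-wise average of co-registered single-view chips (each [H, W],
        scaled to [0, 1]); with a single view this reduces to that chip."""
        return np.stack(views, axis=0).astype(float).mean(axis=0)

    # Two RBM feature layers feeding a logistic-regression classifier.
    model = Pipeline([
        ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
        ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

    # X_train: flattened chips (single-view or fused), values in [0, 1];
    # y_train: object class labels. Training and inference would look like:
    # model.fit(X_train, y_train)
    # label = model.predict(fuse_views(views).reshape(1, -1))

The point mirrored here is that fusion happens at the earliest (image) level, so single-view and fused multi-view chips pass through the same unified classifier.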

Original language: English (US)
Title of host publication: OCEANS 2016 MTS/IEEE Monterey, OCE 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781509015375
State: Published - Nov 28 2016
Event: 2016 OCEANS MTS/IEEE Monterey, OCE 2016 - Monterey, United States
Duration: Sep 19 2016 – Sep 23 2016

Publication series

Name: OCEANS 2016 MTS/IEEE Monterey, OCE 2016

Other

Other: 2016 OCEANS MTS/IEEE Monterey, OCE 2016
Country/Territory: United States
City: Monterey
Period: 9/19/16 – 9/23/16

All Science Journal Classification (ASJC) codes

  • Instrumentation
  • Oceanography
  • Ocean Engineering
