Revealing backdoors, post-training, in DNN classifiers via novel inference on optimized perturbations inducing group misclassification

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Scopus citations

Abstract

Recently, a special type of data poisoning (DP) attack against deep neural network (DNN) classifiers, known as a backdoor, was proposed. These attacks do not seek to degrade classification accuracy, but rather to have the classifier learn to classify to a target class whenever the backdoor pattern is present in a test example. Here, we address the challenging post-training detection of backdoor attacks in DNN image classifiers, wherein the defender does not have access to the poisoned training set, but only to the trained classifier itself, as well as to clean (unpoisoned) examples from the classification domain. We propose a defense against imperceptible backdoor attacks based on perturbation optimization and novel, robust detection inference. Our method detects whether the trained DNN has been backdoor-attacked and infers the source and target classes involved in an attack. It outperforms alternative defenses for several backdoor patterns, data sets, and attack settings.
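The detection idea described above can be illustrated with a toy sketch: for every (source, target) class pair, optimize a common additive perturbation that is as small as possible while still inducing group misclassification of clean source-class examples to the target class, then apply detection inference to the resulting perturbation sizes (an attacked pair would require an anomalously small perturbation). The code below is a conceptual illustration only, not the authors' algorithm: the linear softmax classifier, synthetic data, penalty weight, and min/median statistic are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): 3 classes in R^5, linear softmax classifier
# whose weight rows coincide with the class means.
means = np.eye(3, 5) * 4.0
W = means

def logits(X):
    return X @ W.T

def predict(X):
    return logits(X).argmax(axis=1)

def optimize_perturbation(Xs, t, steps=300, lr=0.5):
    """Gradient-descend a single additive perturbation d so that the
    class-s examples Xs are (mostly) classified to target t, while an
    L2 penalty keeps ||d|| small."""
    d = np.zeros(W.shape[1])
    for _ in range(steps):
        Z = logits(Xs + d)
        Z -= Z.max(axis=1, keepdims=True)        # stabilized softmax
        P = np.exp(Z); P /= P.sum(axis=1, keepdims=True)
        onehot = np.zeros_like(P); onehot[:, t] = 1.0
        # gradient of mean cross-entropy toward t w.r.t. d, plus L2 penalty
        grad = ((P - onehot) @ W).mean(axis=0) + 0.05 * d
        d -= lr * grad
        if (predict(Xs + d) == t).mean() >= 0.95:  # group misclassification reached
            break
    return d

# Clean (unpoisoned) examples per class, as available to the defender.
X = {c: means[c] + 0.3 * rng.standard_normal((50, 5)) for c in range(3)}

# Record the optimized perturbation norm for every (source, target) pair.
norms = {(s, t): np.linalg.norm(optimize_perturbation(X[s], t))
         for s in range(3) for t in range(3) if s != t}

# Simplified detection inference: compare the smallest required perturbation
# against the typical one. Here no class pair is attacked, so the minimum
# should not be far below the median.
ratio = min(norms.values()) / np.median(list(norms.values()))
print(f"min/median perturbation-norm ratio: {ratio:.2f}")
```

In the real method the classifier is a DNN and the perturbations are optimized by backpropagation, but the structure is the same: pair-wise perturbation optimization followed by robust inference on the perturbation statistics, which also reveals the inferred source and target classes of a detected attack.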

Original language: English (US)
Title of host publication: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3827-3831
Number of pages: 5
ISBN (Electronic): 9781509066315
DOIs
State: Published - May 2020
Event: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Barcelona, Spain
Duration: May 4, 2020 to May 8, 2020

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2020-May
ISSN (Print): 1520-6149

Conference

Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Country/Territory: Spain
City: Barcelona
Period: 5/4/20 to 5/8/20

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
