Improving robustness of deep neural networks via large-difference transformation

Longwei Wang, Chengfei Wang, Yupeng Li, Rui Wang

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Recent research shows that previous model-agnostic methods, which transform input images before feeding them into the classifier, fail to defend against adversarial examples. We hypothesize that the small-difference transformations commonly used are to blame, and we therefore propose a new model-agnostic defense based on a large-difference transformation. Specifically, we apply a novel primitive-based transformation that rebuilds the input images from primitives of colorful triangles. Measured by the distortion required to completely break each defense, our experiments on an ImageNet subset show that significantly larger distortions (0.12) are needed to break our defense than to break other state-of-the-art model-agnostic defenses (0.05–0.06) under the strong Backward Pass Differentiable Approximation (BPDA) attack. This finding indicates that large-difference transformations can improve adversarial robustness, suggesting a promising new direction for addressing the challenge of adversarial robustness.
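The abstract does not specify the reconstruction algorithm, but a transformation that "rebuilds the input images from primitives of colorful triangles" can be sketched with a simple greedy procedure: repeatedly propose a random triangle, fill it with the best constant color for that region, and keep it only if it reduces the reconstruction error. The function names (`primitive_transform`, `triangle_mask`) and all parameters below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def triangle_mask(h, w, verts):
    """Boolean mask of the pixels inside a triangle given three (x, y) vertices."""
    ys, xs = np.mgrid[0:h, 0:w]
    (x0, y0), (x1, y1), (x2, y2) = verts
    # Edge functions (signed areas); a pixel is inside when all three
    # share the same sign (or lie on an edge).
    d0 = (xs - x1) * (y0 - y1) - (x0 - x1) * (ys - y1)
    d1 = (xs - x2) * (y1 - y2) - (x1 - x2) * (ys - y2)
    d2 = (xs - x0) * (y2 - y0) - (x2 - x0) * (ys - y0)
    has_neg = (d0 < 0) | (d1 < 0) | (d2 < 0)
    has_pos = (d0 > 0) | (d1 > 0) | (d2 > 0)
    return ~(has_neg & has_pos)

def primitive_transform(image, n_triangles=100, tries_per_triangle=10):
    """Greedily rebuild `image` (H x W x 3, floats in [0, 1]) from colored triangles.

    Hypothetical sketch of a primitive-based transformation: start from the
    mean color and add one triangle at a time, keeping only proposals that
    lower the squared reconstruction error.
    """
    h, w, _ = image.shape
    canvas = np.full_like(image, image.reshape(-1, 3).mean(axis=0))
    for _ in range(n_triangles):
        best, best_err = None, ((canvas - image) ** 2).sum()
        for _ in range(tries_per_triangle):
            verts = rng.integers(0, (w, h), size=(3, 2))  # random (x, y) corners
            mask = triangle_mask(h, w, verts)
            if not mask.any():
                continue
            color = image[mask].mean(axis=0)  # best constant fill for the region
            trial = canvas.copy()
            trial[mask] = color
            err = ((trial - image) ** 2).sum()
            if err < best_err:
                best, best_err = trial, err
        if best is not None:
            canvas = best
    return canvas
```

Because each triangle replaces a whole region with a single color, the reconstruction differs substantially from the original pixels, which is the "large difference" property the defense relies on; the classifier is then applied to `primitive_transform(x)` instead of `x`.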

Original language: English (US)
Pages (from-to): 411-419
Number of pages: 9
Journal: Neurocomputing
Volume: 450
DOIs
State: Published - Aug 25 2021

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
