Interpretable deep learning under fire

Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Providing explanations for deep neural network (DNN) models is crucial for their use in security-sensitive domains. A plethora of interpretation models have been proposed to help users understand the inner workings of DNNs: how does a DNN arrive at a specific decision for a given input? The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process. Yet, due to its data-driven nature, the interpretability itself is potentially susceptible to malicious manipulations, about which little is known thus far. Here we bridge this gap by conducting the first systematic study on the security of interpretable deep learning systems (IDLSes). We show that existing IDLSes are highly vulnerable to adversarial manipulations. Specifically, we present ADV², a new class of attacks that generate adversarial inputs not only misleading target DNNs but also deceiving their coupled interpretation models. Through empirical evaluation against four major types of IDLSes on benchmark datasets and in security-critical applications (e.g., skin cancer diagnosis), we demonstrate that with ADV² the adversary is able to arbitrarily designate an input's prediction and interpretation. Further, with both analytical and empirical evidence, we identify the prediction-interpretation gap as one root cause of this vulnerability - a DNN and its interpretation model are often misaligned, resulting in the possibility of exploiting both models simultaneously. Finally, we explore potential countermeasures against ADV², including leveraging its low transferability and incorporating it in an adversarial training framework. Our findings shed light on designing and operating IDLSes in a more secure and informative fashion, leading to several promising research directions.
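
The attack described in the abstract can be pictured as a PGD-style joint optimization: minimize the prediction loss toward an adversary-chosen class plus a weighted interpretation loss toward an adversary-chosen attribution map, subject to a perturbation budget. Below is a minimal sketch of that idea, not the authors' implementation: it assumes a differentiable PyTorch classifier `model`, uses plain input-gradient saliency as a stand-in for the coupled interpreter, and all hyperparameters (`eps`, `alpha`, `steps`, `lam`) are illustrative.

```python
import torch
import torch.nn.functional as F

def input_gradient_map(model, x, cls):
    """Differentiable input-gradient saliency map (a simple stand-in
    for the coupled interpreter; returns an (N, H, W) attribution map)."""
    score = model(x).gather(1, cls.view(-1, 1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().amax(dim=1)  # collapse channels into one map

def adv2_attack(model, x0, cls, target_map,
                eps=8 / 255, alpha=1 / 255, steps=100, lam=0.1):
    """PGD-style search for an input that (a) the model classifies as
    `cls` and (b) the interpreter maps close to `target_map`, while
    staying inside an L-infinity ball of radius `eps` around `x0`."""
    x = x0.clone()
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        loss_prd = F.cross_entropy(model(x), cls)                 # prediction term
        loss_int = F.mse_loss(input_gradient_map(model, x, cls),  # interpretation term
                              target_map)
        loss = loss_prd + lam * loss_int
        grad, = torch.autograd.grad(loss, x)
        x = x - alpha * grad.sign()         # descend the joint objective
        x = x0 + (x - x0).clamp(-eps, eps)  # project back into the eps-ball
        x = x.clamp(0, 1)                   # keep a valid image
    return x.detach()

# Usage (illustrative shapes): x0 in [0, 1] of shape (N, 3, H, W),
# cls a LongTensor of target labels, target_map of shape (N, H, W):
# x_adv = adv2_attack(model, x0, cls, target_map)
```

The paper evaluates the attack against four interpreter families (back-propagation-, representation-, model-, and perturbation-guided), some of which are not directly differentiable and need additional machinery; this sketch covers only the simplest gradient-based case.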

Original language: English (US)
Title of host publication: Proceedings of the 29th USENIX Security Symposium
Publisher: USENIX Association
Pages: 1659-1676
Number of pages: 18
ISBN (Electronic): 9781939133175
State: Published - 2020
Event: 29th USENIX Security Symposium - Virtual, Online
Duration: Aug 12, 2020 – Aug 14, 2020

Publication series

Name: Proceedings of the 29th USENIX Security Symposium

Conference

Conference: 29th USENIX Security Symposium
City: Virtual, Online
Period: 8/12/20 – 8/14/20

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Safety, Risk, Reliability and Quality
