Abstract
Providing explanations for deep neural network (DNN) models is crucial for their use in security-sensitive domains. A plethora of interpretation models have been proposed to help users understand the inner workings of DNNs: how does a DNN arrive at a specific decision for a given input? The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process. Yet, due to its data-driven nature, interpretability itself is potentially susceptible to malicious manipulation, about which little is known thus far. Here we bridge this gap by conducting the first systematic study on the security of interpretable deep learning systems (IDLSes). We show that existing IDLSes are highly vulnerable to adversarial manipulation. Specifically, we present ADV2, a new class of attacks that generate adversarial inputs which not only mislead target DNNs but also deceive their coupled interpretation models. Through empirical evaluation against four major types of IDLSes on benchmark datasets and in security-critical applications (e.g., skin cancer diagnosis), we demonstrate that with ADV2 the adversary is able to arbitrarily designate an input's prediction and interpretation. Further, with both analytical and empirical evidence, we identify the prediction-interpretation gap as one root cause of this vulnerability: a DNN and its interpretation model are often misaligned, making it possible to exploit both models simultaneously. Finally, we explore potential countermeasures against ADV2, including leveraging its low transferability and incorporating it into an adversarial training framework. Our findings shed light on designing and operating IDLSes in a more secure and informative fashion, and point to several promising research directions.
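The abstract describes adversarial inputs that simultaneously target a model's prediction and its interpretation. The toy sketch below illustrates that general idea only; it is not the paper's actual ADV2 algorithm. A one-hidden-layer network stands in for the DNN, an input-gradient saliency map stands in for the interpretation model, and a sign-gradient PGD loop (with finite-difference gradients, to avoid second-order derivatives) minimizes a weighted sum of a prediction term and an interpretation term. All hyperparameters (`lam`, `eps`, `step`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 6                                   # toy input / hidden sizes
W1 = rng.normal(size=(h, d)) / np.sqrt(d)     # hidden-layer weights
w2 = rng.normal(size=h) / np.sqrt(h)          # output weights

def predict(x):
    """Toy one-hidden-layer net: probability of class 1."""
    a = np.tanh(W1 @ x)
    return 1.0 / (1.0 + np.exp(-(w2 @ a)))

def saliency(x):
    """Input-gradient 'interpretation': d(logit)/dx."""
    a = np.tanh(W1 @ x)
    return W1.T @ (w2 * (1.0 - a ** 2))

def joint_loss(x, target_cls, target_map, lam=1.0):
    """Prediction term (cross-entropy toward the adversary's class)
    plus an interpretation term (match a target saliency map)."""
    p = predict(x)
    pred = -np.log(p + 1e-12) if target_cls == 1 else -np.log(1.0 - p + 1e-12)
    intr = np.sum((saliency(x) - target_map) ** 2)
    return pred + lam * intr

def pgd(x0, target_cls, target_map, eps=0.5, step=0.05, iters=200):
    """Sign-gradient descent on the joint loss, projected onto an
    L-infinity ball of radius eps around x0; returns the best iterate."""
    x = x0.copy()
    best_x, best_loss = x0.copy(), joint_loss(x0, target_cls, target_map)
    for _ in range(iters):
        # central finite-difference gradient of the joint loss
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = 1e-4
            g[i] = (joint_loss(x + e, target_cls, target_map)
                    - joint_loss(x - e, target_cls, target_map)) / 2e-4
        x = np.clip(x - step * np.sign(g), x0 - eps, x0 + eps)
        cur = joint_loss(x, target_cls, target_map)
        if cur < best_loss:
            best_x, best_loss = x.copy(), cur
    return best_x

x0 = rng.normal(size=d)                        # the input to perturb
target_map = saliency(rng.normal(size=d))      # pretend: a benign input's map
loss_before = joint_loss(x0, 1, target_map)
x_adv = pgd(x0, 1, target_map)
loss_after = joint_loss(x_adv, 1, target_map)
```

The key design point mirrored from the abstract is that the perturbation is optimized against both models at once; because the prediction and the saliency map are computed by different functions of the input (the "prediction-interpretation gap"), a small perturbation can move both toward adversary-chosen targets.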
| Original language | English (US) |
|---|---|
| Title of host publication | Proceedings of the 29th USENIX Security Symposium |
| Publisher | USENIX Association |
| Pages | 1659-1676 |
| Number of pages | 18 |
| ISBN (Electronic) | 9781939133175 |
| State | Published - 2020 |
| Event | 29th USENIX Security Symposium, USENIX Security 2020 (Virtual, Online, Aug 12 2020 → Aug 14 2020) |
Publication series
| Name | Proceedings of the 29th USENIX Security Symposium |
|---|---|
Conference
| Conference | 29th USENIX Security Symposium, USENIX Security 2020 |
|---|---|
| City | Virtual, Online |
| Period | 8/12/20 → 8/14/20 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
SDG 3 Good Health and Well-being
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Information Systems
- Safety, Risk, Reliability and Quality
Fingerprint
Dive into the research topics of 'Interpretable deep learning under fire'.