Abstract
An intriguing property of deep neural networks (DNNs) is their inherent vulnerability to adversarial inputs, which significantly hinders the application of DNNs in security-critical domains. Despite the plethora of work on adversarial attacks and defenses, many important questions regarding the inference behavior of adversarial inputs remain open. This work takes a solid step towards answering those questions by investigating the information flows of normal and adversarial inputs within various DNN models and conducting an in-depth comparative analysis of their discriminative patterns. Our findings point to several promising directions for designing more effective defense mechanisms.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 2228-2230 |
| Number of pages | 3 |
| Journal | Proceedings of the ACM Conference on Computer and Communications Security |
| Volume | 2018-January |
| DOIs | |
| State | Published - 2018 |
| Event | 25th ACM Conference on Computer and Communications Security, CCS 2018 - Toronto, Canada; Duration: Oct 15 2018 → … |
All Science Journal Classification (ASJC) codes
- Software
- Computer Networks and Communications