An intriguing property of deep neural networks (DNNs) is their inherent vulnerability to adversarial inputs, which significantly hinders the application of DNNs in security-critical domains. Despite the plethora of work on adversarial attacks and defenses, many important questions regarding the inference behaviors of adversarial inputs remain open. This work represents a solid step towards answering those questions by investigating the information flows of normal and adversarial inputs within various DNN models and conducting an in-depth comparative analysis of their discriminative patterns. Our findings point to several promising directions for designing more effective defense mechanisms.
Proceedings of the ACM Conference on Computer and Communications Security
Published - 2018
25th ACM Conference on Computer and Communications Security, CCS 2018 - Toronto, Canada
Duration: Oct 15 2018 → …
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications