TY - GEN
T1 - Adversarial examples for malware detection
AU - Grosse, Kathrin
AU - Papernot, Nicolas
AU - Manoharan, Praveen
AU - Backes, Michael
AU - McDaniel, Patrick
N1 - Publisher Copyright:
© 2017, Springer International Publishing AG.
PY - 2017
Y1 - 2017
N2 - Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor—yet carefully selected—perturbations. In this work, we expand on existing adversarial example crafting algorithms to construct a highly effective attack that uses adversarial examples against malware detection models. To this end, we identify and overcome key challenges that prevent existing algorithms from being applied against malware detection: our approach operates in discrete and often binary input domains, whereas previous work operated only in continuous and differentiable domains. In addition, our technique guarantees the malware functionality of the adversarially manipulated program. In our evaluation, we train a neural network for malware detection on the DREBIN data set and achieve classification performance matching the state of the art from the literature. Using the augmented adversarial crafting algorithm, we then manage to mislead this classifier for 63% of all malware samples. We also present a detailed evaluation of defensive mechanisms previously introduced in the computer vision context, including distillation and adversarial training, which show promising results.
AB - Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor—yet carefully selected—perturbations. In this work, we expand on existing adversarial example crafting algorithms to construct a highly effective attack that uses adversarial examples against malware detection models. To this end, we identify and overcome key challenges that prevent existing algorithms from being applied against malware detection: our approach operates in discrete and often binary input domains, whereas previous work operated only in continuous and differentiable domains. In addition, our technique guarantees the malware functionality of the adversarially manipulated program. In our evaluation, we train a neural network for malware detection on the DREBIN data set and achieve classification performance matching the state of the art from the literature. Using the augmented adversarial crafting algorithm, we then manage to mislead this classifier for 63% of all malware samples. We also present a detailed evaluation of defensive mechanisms previously introduced in the computer vision context, including distillation and adversarial training, which show promising results.
UR - http://www.scopus.com/inward/record.url?scp=85029485845&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85029485845&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-66399-9_4
DO - 10.1007/978-3-319-66399-9_4
M3 - Conference contribution
AN - SCOPUS:85029485845
SN - 9783319663982
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 62
EP - 79
BT - Computer Security – ESORICS 2017 - 22nd European Symposium on Research in Computer Security, Proceedings
A2 - Foley, Simon N.
A2 - Gollmann, Dieter
A2 - Snekkenes, Einar
PB - Springer Verlag
T2 - 22nd European Symposium on Research in Computer Security, ESORICS 2017
Y2 - 11 September 2017 through 15 September 2017
ER -