TY - CPAPER
T1 - Practical black-box attacks against machine learning
AU - Papernot, Nicolas
AU - McDaniel, Patrick
AU - Goodfellow, Ian
AU - Jha, Somesh
AU - Celik, Z. Berkay
AU - Swami, Ananthram
N1 - Publisher Copyright:
© 2017 ACM.
PY - 2017/4/2
AB - Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
UR - http://www.scopus.com/inward/record.url?scp=85021992078&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85021992078&partnerID=8YFLogxK
DO - 10.1145/3052973.3053009
M3 - Conference contribution
AN - SCOPUS:85021992078
T3 - ASIA CCS 2017 - Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security
SP - 506
EP - 519
BT - ASIA CCS 2017 - Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security
PB - Association for Computing Machinery, Inc
T2 - 2017 ACM Asia Conference on Computer and Communications Security, ASIA CCS 2017
Y2 - 2 April 2017 through 6 April 2017
ER -