TY - GEN
T1 - Crafting adversarial input sequences for recurrent neural networks
AU - Papernot, Nicolas
AU - McDaniel, Patrick
AU - Swami, Ananthram
AU - Harang, Richard
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/12/22
Y1 - 2016/12/22
N2 - Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors. Previous efforts have shown that numerous machine learning models are vulnerable to adversarial manipulations of their inputs, which take the form of adversarial samples. Such inputs are crafted by adding carefully selected perturbations to legitimate inputs so as to force the machine learning model to misbehave, for instance by outputting a wrong class if the machine learning task of interest is classification. In fact, to the best of our knowledge, all previous work on adversarial sample crafting for neural networks considered models used to solve classification tasks, most frequently in computer vision applications. In this paper, we investigate adversarial input sequences for recurrent neural networks processing sequential data. We show that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks. In an experiment, we show that adversaries can craft adversarial sequences misleading both categorical and sequential recurrent neural networks.
AB - Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors. Previous efforts have shown that numerous machine learning models are vulnerable to adversarial manipulations of their inputs, which take the form of adversarial samples. Such inputs are crafted by adding carefully selected perturbations to legitimate inputs so as to force the machine learning model to misbehave, for instance by outputting a wrong class if the machine learning task of interest is classification. In fact, to the best of our knowledge, all previous work on adversarial sample crafting for neural networks considered models used to solve classification tasks, most frequently in computer vision applications. In this paper, we investigate adversarial input sequences for recurrent neural networks processing sequential data. We show that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks. In an experiment, we show that adversaries can craft adversarial sequences misleading both categorical and sequential recurrent neural networks.
UR - http://www.scopus.com/inward/record.url?scp=85011845631&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85011845631&partnerID=8YFLogxK
U2 - 10.1109/MILCOM.2016.7795300
DO - 10.1109/MILCOM.2016.7795300
M3 - Conference contribution
AN - SCOPUS:85011845631
T3 - Proceedings - IEEE Military Communications Conference MILCOM
SP - 49
EP - 54
BT - MILCOM 2016 - 2016 IEEE Military Communications Conference
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 35th IEEE Military Communications Conference, MILCOM 2016
Y2 - 1 November 2016 through 3 November 2016
ER -