TY - GEN
T1 - Enablers of Adversarial Attacks in Machine Learning
AU - Izmailov, Rauf
AU - Sugrim, Shridatt
AU - Chadha, Ritu
AU - McDaniel, Patrick
AU - Swami, Ananthram
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - The proliferation of machine learning (ML) and artificial intelligence (AI) systems for military and security applications creates substantial challenges for designing and deploying mechanisms that learn, adapt, reason, and act on Dinky, Dirty, Dynamic, Deceptive, Distributed (D5) data. While the Dinky and Dirty challenges have been extensively explored in ML theory, the Dynamic challenge (when the statistical distribution of training data differs from that of test data) has been a persistent problem in ML applications. The most recent Deceptive challenge is a malicious distribution shift between training and test data that amplifies the effects of the Dynamic challenge to the point of complete breakdown of ML algorithms. Using the MNIST dataset as a simple calibration example, we explore two questions: (1) What geometric and statistical characteristics of the data distribution can be exploited by an adversary with a given attack magnitude? (2) What countermeasures can protect the constructed decision rule (at the cost of somewhat decreased performance) against a malicious distribution shift within a given attack magnitude? While not offering a complete solution to the problem, we collect and interpret the observations obtained in a way that provides practical guidance for making more adversary-resistant choices in the design of ML algorithms.
AB - The proliferation of machine learning (ML) and artificial intelligence (AI) systems for military and security applications creates substantial challenges for designing and deploying mechanisms that learn, adapt, reason, and act on Dinky, Dirty, Dynamic, Deceptive, Distributed (D5) data. While the Dinky and Dirty challenges have been extensively explored in ML theory, the Dynamic challenge (when the statistical distribution of training data differs from that of test data) has been a persistent problem in ML applications. The most recent Deceptive challenge is a malicious distribution shift between training and test data that amplifies the effects of the Dynamic challenge to the point of complete breakdown of ML algorithms. Using the MNIST dataset as a simple calibration example, we explore two questions: (1) What geometric and statistical characteristics of the data distribution can be exploited by an adversary with a given attack magnitude? (2) What countermeasures can protect the constructed decision rule (at the cost of somewhat decreased performance) against a malicious distribution shift within a given attack magnitude? While not offering a complete solution to the problem, we collect and interpret the observations obtained in a way that provides practical guidance for making more adversary-resistant choices in the design of ML algorithms.
UR - http://www.scopus.com/inward/record.url?scp=85061441115&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061441115&partnerID=8YFLogxK
U2 - 10.1109/MILCOM.2018.8599715
DO - 10.1109/MILCOM.2018.8599715
M3 - Conference contribution
AN - SCOPUS:85061441115
T3 - Proceedings - IEEE Military Communications Conference MILCOM
SP - 425
EP - 430
BT - 2018 IEEE Military Communications Conference, MILCOM 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE Military Communications Conference, MILCOM 2018
Y2 - 29 October 2018 through 31 October 2018
ER -