TY - GEN
T1 - Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
AU - Papernot, Nicolas
AU - McDaniel, Patrick
AU - Wu, Xi
AU - Jha, Somesh
AU - Swami, Ananthram
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/8/16
Y1 - 2016/8/16
N2 - Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce the effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
AB - Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce the effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
UR - http://www.scopus.com/inward/record.url?scp=84987680683&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84987680683&partnerID=8YFLogxK
U2 - 10.1109/SP.2016.41
DO - 10.1109/SP.2016.41
M3 - Conference contribution
AN - SCOPUS:84987680683
T3 - Proceedings - 2016 IEEE Symposium on Security and Privacy, SP 2016
SP - 582
EP - 597
BT - Proceedings - 2016 IEEE Symposium on Security and Privacy, SP 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2016 IEEE Symposium on Security and Privacy, SP 2016
Y2 - 23 May 2016 through 25 May 2016
ER -