TY - GEN
T1 - Robustness Analysis of CNN-based Malware Family Classification Methods Against Various Adversarial Attacks
AU - Choi, Seok Hwan
AU - Shin, Jin Myeong
AU - Liu, Peng
AU - Choi, Yoon Ho
N1 - Funding Information:
This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2018R1D1A3B07043392) and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2019-2014-1-00743) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - Among malware family classification methods, image-based classification methods have attracted much attention. In particular, Convolutional Neural Network (CNN)-based malware family classification methods have been widely studied due to their fast classification speed and high classification accuracy. However, previous studies on CNN-based classification methods focused only on improving the classification accuracy of malware families; they did not consider that the accuracy of CNN-based malware classification methods can decrease in the presence of adversarial attacks. In this paper, we analyze the robustness of various CNN-based malware family classification models under adversarial attacks. By adding imperceptible non-random perturbations to the input image, we measure how the accuracy of CNN-based malware family classification models is affected. We also show the influence of three significant visualization parameters (i.e., the size of the input image, the dimension of the input image, and the conversion color of special characters) on the accuracy variation under adversarial attacks. Evaluation results on the Microsoft malware dataset show that the accuracy of a CNN-based malware family classification method can drop from over 98% to less than 7%.
AB - Among malware family classification methods, image-based classification methods have attracted much attention. In particular, Convolutional Neural Network (CNN)-based malware family classification methods have been widely studied due to their fast classification speed and high classification accuracy. However, previous studies on CNN-based classification methods focused only on improving the classification accuracy of malware families; they did not consider that the accuracy of CNN-based malware classification methods can decrease in the presence of adversarial attacks. In this paper, we analyze the robustness of various CNN-based malware family classification models under adversarial attacks. By adding imperceptible non-random perturbations to the input image, we measure how the accuracy of CNN-based malware family classification models is affected. We also show the influence of three significant visualization parameters (i.e., the size of the input image, the dimension of the input image, and the conversion color of special characters) on the accuracy variation under adversarial attacks. Evaluation results on the Microsoft malware dataset show that the accuracy of a CNN-based malware family classification method can drop from over 98% to less than 7%.
UR - http://www.scopus.com/inward/record.url?scp=85071715990&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071715990&partnerID=8YFLogxK
U2 - 10.1109/CNS.2019.8802809
DO - 10.1109/CNS.2019.8802809
M3 - Conference contribution
AN - SCOPUS:85071715990
T3 - 2019 IEEE Conference on Communications and Network Security, CNS 2019
BT - 2019 IEEE Conference on Communications and Network Security, CNS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE Conference on Communications and Network Security, CNS 2019
Y2 - 10 June 2019 through 12 June 2019
ER -