TY  - CPAPER
T1  - CADE: Detecting and Explaining Concept Drift Samples for Security Applications
T2 - 30th USENIX Security Symposium, USENIX Security 2021
AU - Yang, Limin
AU - Guo, Wenbo
AU - Hao, Qingying
AU - Ciptadi, Arridhana
AU - Ahmadzadeh, Ali
AU - Xing, Xinyu
AU - Wang, Gang
N1 - Publisher Copyright:
© 2021 by The USENIX Association. All rights reserved.
PY - 2021
Y1 - 2021
AB  - Concept drift poses a critical challenge to deploying machine learning models for practical security problems. Because attackers (and/or their benign counterparts) dynamically change their behavior, the testing data distribution often shifts away from the original training distribution over time, causing major failures in the deployed model. To combat concept drift, we present CADE, a novel system that aims to 1) detect drifting samples that deviate from existing classes, and 2) provide explanations to reason about the detected drift. Unlike traditional approaches (which require a large number of new labels to determine concept drift statistically), we aim to identify individual drifting samples as they arrive. Recognizing the challenges introduced by the high-dimensional outlier space, we propose to map the data samples into a low-dimensional space and automatically learn a distance function to measure the dissimilarity between samples. Using contrastive learning, we can take full advantage of the existing labels in the training dataset to learn how to compare and contrast pairs of samples. To reason about the meaning of the detected drift, we develop a distance-based explanation method. We show that explaining “distance” is much more effective than traditional methods that focus on explaining a “decision boundary” in this problem context. We evaluate CADE with two case studies: Android malware classification and network intrusion detection. We further work with a security company to test CADE on its malware database. Our results show that CADE can effectively detect drifting samples and provide semantically meaningful explanations.
UR - http://www.scopus.com/inward/record.url?scp=85112826522&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112826522&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85112826522
T3 - Proceedings of the 30th USENIX Security Symposium
SP - 2327
EP - 2344
BT - Proceedings of the 30th USENIX Security Symposium
PB - USENIX Association
Y2 - 11 August 2021 through 13 August 2021
ER -