TY - GEN
T1 - A Backdoor Attack against 3D Point Cloud Classifiers
AU - Xiang, Zhen
AU - Miller, David J.
AU - Chen, Siheng
AU - Li, Xi
AU - Kesidis, George
N1 - Funding Information:
Supported in part by an AFOSR DDDAS grant.
Publisher Copyright:
© 2021 IEEE
PY - 2021
N2 - Vulnerability of 3D point cloud (PC) classifiers has become a grave concern due to the popularity of 3D sensors in safety-critical applications. Existing adversarial attacks against 3D PC classifiers are all test-time evasion (TTE) attacks that aim to induce test-time misclassifications using knowledge of the classifier. But since the victim classifier is usually not accessible to the attacker, the threat is largely diminished in practice, as PC TTEs typically have poor transferability. Here, we propose the first backdoor attack (BA) against PC classifiers. Originally proposed for images, BAs poison the victim classifier's training set so that the classifier learns to predict the attacker's target class whenever the attacker's backdoor pattern is present in a given input sample. Significantly, BAs do not require knowledge of the victim classifier. Unlike image BAs, we propose inserting a cluster of points into a PC as a robust backdoor pattern customized for 3D PCs. Such clusters are also consistent with a physical attack (i.e., with an object captured in a scene). We optimize the cluster's location using an independently trained surrogate classifier and choose the cluster's local geometry to evade possible PC preprocessing and PC anomaly detectors (ADs). Experimentally, our BA achieves a uniformly high success rate (≥ 87%) and shows evasiveness against state-of-the-art PC ADs. Code is available at https://github.com/zhenxianglance/PCBA.
UR - http://www.scopus.com/inward/record.url?scp=85114911370&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85114911370&partnerID=8YFLogxK
DO - 10.1109/ICCV48922.2021.00750
M3 - Conference contribution
AN - SCOPUS:85114911370
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 7577
EP - 7587
BT - Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021
Y2 - 11 October 2021 through 17 October 2021
ER -