TY - JOUR
T1 - DetectSec
T2 - Evaluating the robustness of object detection models to adversarial attacks
AU - Du, Tianyu
AU - Ji, Shouling
AU - Wang, Bo
AU - He, Sirui
AU - Li, Jinfeng
AU - Li, Bo
AU - Wei, Tao
AU - Jia, Yunhan
AU - Beyah, Raheem
AU - Wang, Ting
N1 - Publisher Copyright:
© 2022 Wiley Periodicals LLC.
PY - 2022/9
Y1 - 2022/9
N2 - Despite their tremendous success in various machine learning tasks, deep neural networks (DNNs) are inherently vulnerable to adversarial examples, maliciously crafted inputs that cause DNNs to misbehave. Intensive research has been conducted on this phenomenon for simple tasks (e.g., image classification). However, little is known about this adversarial vulnerability in object detection, a much more complicated task that often requires specialized DNNs and multiple additional components. In this paper, we present DetectSec, a uniform platform for robustness analysis of object detection models. Currently, DetectSec implements 13 representative adversarial attacks with 7 utility metrics and 13 defenses on 18 standard object detection models. Leveraging DetectSec, we conduct the first rigorous evaluation of adversarial attacks on state-of-the-art object detection models. We analyze the impact of factors including DNN architecture and capacity on model robustness. We show that many conclusions about adversarial attacks and defenses in image classification do not transfer to object detection; for example, the targeted attack is stronger than the untargeted attack for two-stage detectors. Our findings will aid future efforts to understand and defend against adversarial attacks in complicated tasks. In addition, we compare the robustness of different detection models and discuss their relative strengths and weaknesses. The DetectSec platform will be open-sourced as a unique facility for further research on adversarial attacks and defenses in object detection.
UR - http://www.scopus.com/inward/record.url?scp=85124389749&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124389749&partnerID=8YFLogxK
U2 - 10.1002/int.22851
DO - 10.1002/int.22851
M3 - Article
AN - SCOPUS:85124389749
SN - 0884-8173
VL - 37
SP - 6463
EP - 6492
JO - International Journal of Intelligent Systems
JF - International Journal of Intelligent Systems
IS - 9
ER -