TY - GEN
T1 - Towards General Robustness Verification of MaxPool-Based Convolutional Neural Networks via Tightening Linear Approximation
AU - Xiao, Yuan
AU - Ma, Shiqing
AU - Zhai, Juan
AU - Fang, Chunrong
AU - Jia, Jinyuan
AU - Chen, Zhenyu
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The robustness of convolutional neural networks (CNNs) is vital to modern AI-driven systems. It can be quantified by formal verification, which provides a certified lower bound within which any perturbation does not alter the original input's classification result. This is challenging due to nonlinear components such as MaxPool. At present, many verification methods are sound but risk losing some precision to enhance efficiency and scalability, and thus a certified lower bound is a crucial criterion for evaluating the performance of verification tools. In this paper, we present MaxLin, a robustness verifier for MaxPool-based CNNs with tight linear approximation. By tightening the linear approximation of the MaxPool function, we can compute larger certified lower bounds for CNNs. We evaluate MaxLin on open-source benchmarks, including LeNet and networks trained on the MNIST, CIFAR-10, and Tiny ImageNet datasets. The results show that MaxLin outperforms state-of-the-art tools with up to 110.60% improvement in the certified lower bound and a 5.13× speedup on the same neural networks. Our code is available at https://github.com/xiaoyuanpigo/maxlin.
AB - The robustness of convolutional neural networks (CNNs) is vital to modern AI-driven systems. It can be quantified by formal verification, which provides a certified lower bound within which any perturbation does not alter the original input's classification result. This is challenging due to nonlinear components such as MaxPool. At present, many verification methods are sound but risk losing some precision to enhance efficiency and scalability, and thus a certified lower bound is a crucial criterion for evaluating the performance of verification tools. In this paper, we present MaxLin, a robustness verifier for MaxPool-based CNNs with tight linear approximation. By tightening the linear approximation of the MaxPool function, we can compute larger certified lower bounds for CNNs. We evaluate MaxLin on open-source benchmarks, including LeNet and networks trained on the MNIST, CIFAR-10, and Tiny ImageNet datasets. The results show that MaxLin outperforms state-of-the-art tools with up to 110.60% improvement in the certified lower bound and a 5.13× speedup on the same neural networks. Our code is available at https://github.com/xiaoyuanpigo/maxlin.
UR - http://www.scopus.com/inward/record.url?scp=85207244686&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85207244686&partnerID=8YFLogxK
U2 - 10.1109/CVPR52733.2024.02339
DO - 10.1109/CVPR52733.2024.02339
M3 - Conference contribution
AN - SCOPUS:85207244686
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 24766
EP - 24775
BT - Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
PB - IEEE Computer Society
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Y2 - 16 June 2024 through 22 June 2024
ER -