TY - GEN
T1 - LibSteal
T2 - 18th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2023
AU - Zhang, Jinquan
AU - Wang, Pei
AU - Wu, Dinghao
N1 - Publisher Copyright:
Copyright © 2023 by SCITEPRESS - Science and Technology Publications, Lda. Under CC license (CC BY-NC-ND 4.0)
PY - 2023
Y1 - 2023
AB - The need for Deep Learning (DL) based services has rapidly increased in recent years. As part of this trend, the privatization of Deep Neural Network (DNN) models has become increasingly popular: authors give customers or service providers direct access to their models and let them deploy those models on devices or infrastructure outside the authors' control. Meanwhile, the emergence of DL compilers makes it possible to compile a DNN model into a lightweight binary for faster inference, which is attractive to many stakeholders. However, distilling the essence of a model into a binary that can be freely examined by untrusted parties creates an opportunity to leak essential information. With only the DNN binary library, it is possible to extract the neural network architecture via reverse engineering. In this paper, we present LibSteal, a framework that leaks DNN architecture information by reversing the binary library generated by a DL compiler, recovering an architecture similar or even equivalent to the original. The evaluation shows that LibSteal can efficiently steal the architecture information of victim DNN models. After training the extracted models with the same hyper-parameters, we achieve accuracy comparable to that of the original models.
UR - http://www.scopus.com/inward/record.url?scp=85160518539&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85160518539&partnerID=8YFLogxK
U2 - 10.5220/0011754900003464
DO - 10.5220/0011754900003464
M3 - Conference contribution
AN - SCOPUS:85160518539
T3 - International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE - Proceedings
SP - 283
EP - 292
BT - Proceedings of the 18th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2023
A2 - Kaindl, Hermann
A2 - Mannion, Mike
A2 - Maciaszek, Leszek
PB - Science and Technology Publications, Lda
Y2 - 24 April 2023 through 25 April 2023
ER -