TY - GEN
T1 - The Quantum Imitation Game
T2 - 2024 Workshop on Attacks and Solutions in Hardware Security, ASHES 2024
AU - Ghosh, Archisman
AU - Ghosh, Swaroop
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/11/19
Y1 - 2024/11/19
AB - Quantum Machine Learning (QML) is an amalgamation of quantum computing paradigms with machine learning models, providing significant prospects for solving complex problems. However, with the expansion of numerous third-party vendors in the Noisy Intermediate-Scale Quantum (NISQ) era of quantum computing, the security of QML models is of prime importance, particularly against reverse engineering, which could expose sensitive parameters and proprietary algorithms embedded within the models. We assume the untrusted third-party quantum cloud provider is an adversary with white-box access to the transpiled version of the user-designed trained QML model during inference. Although the adversary can steal and use the model without any modification, reverse engineering (RE) to extract the pre-transpiled copy of the QML circuit enables re-transpilation and use of the model on various hardware with completely different native gate sets and even different qubit technologies. Information about the parameters (e.g., their number, placements, and optimized values) can allow further training of the QML model if the adversary plans to alter it to tamper with the watermark, embed their own watermark, or refine the model for other purposes. In this first effort to investigate the RE of QML circuits, we examine quantum classifiers by comparing the training accuracy of original and reverse-engineered models across various sizes (i.e., numbers of qubits and parametric layers) of Quantum Neural Networks (QNNs). We note that multi-qubit classifiers can be reverse-engineered under specific conditions with a mean error of order 10⁻² in a reasonable time. We also propose adding dummy rotation gates with fixed parameters to the QML model to increase the RE overhead as a defense. For instance, adding 2 dummy qubits and 2 layers increases the overhead by ∼1.76× for a classifier with 2 qubits and 3 layers, with a performance overhead of less than 9%. We note that RE is a very powerful attack model that warrants further efforts on defenses.
UR - https://www.scopus.com/pages/publications/85214089631
UR - https://www.scopus.com/inward/citedby.url?scp=85214089631&partnerID=8YFLogxK
U2 - 10.1145/3689939.3695783
DO - 10.1145/3689939.3695783
M3 - Conference contribution
AN - SCOPUS:85214089631
T3 - ASHES 2024 - Proceedings of the 2024 Workshop on Attacks and Solutions in Hardware Security, Co-Located with: CCS 2024
SP - 48
EP - 57
BT - ASHES 2024 - Proceedings of the 2024 Workshop on Attacks and Solutions in Hardware Security, Co-Located with: CCS 2024
PB - Association for Computing Machinery, Inc
Y2 - 14 October 2024 through 18 October 2024
ER -