The Quantum Imitation Game: Reverse Engineering of Quantum Machine Learning Models

Archisman Ghosh, Swaroop Ghosh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

Quantum Machine Learning (QML) is an amalgamation of quantum computing paradigms with machine learning models, providing significant prospects for solving complex problems. However, with the expansion of numerous third-party vendors in the Noisy Intermediate-Scale Quantum (NISQ) era of quantum computing, the security of QML models is of prime importance, particularly against reverse engineering, which could expose sensitive parameters and proprietary algorithms embedded within the models. We assume the untrusted third-party quantum cloud provider is an adversary with white-box access to the transpiled version of the user-designed trained QML model during inference. Although the adversary can steal and use the model without any modification, reverse engineering (RE) to extract the pre-transpiled copy of the QML circuit enables re-transpilation and use of the model on hardware with completely different native gate sets and even different qubit technologies. Information about the parameters (e.g., their number, placement, and optimized values) can allow further training of the QML model if the adversary plans to alter the QML model to tamper with an embedded watermark, embed their own watermark, or refine the model for other purposes. In this first effort to investigate the RE of QML circuits, we examine quantum classifiers by comparing the training accuracy of original and reverse-engineered models across various sizes (i.e., numbers of qubits and parametric layers) of Quantum Neural Networks (QNNs). We find that multi-qubit classifiers can be reverse-engineered under specific conditions with a mean error on the order of 10⁻² in a reasonable time. As a defense, we also propose adding dummy rotation gates with fixed parameters to the QML model to increase the RE overhead. For instance, adding 2 dummy qubits and 2 layers increases the overhead by ∼1.76× for a classifier with 2 qubits and 3 layers, with a performance overhead of less than 9%. We note that RE is a very powerful attack model that warrants further efforts on defenses.
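The threat model (an untrusted cloud provider with white-box access to the transpiled circuit) and the proposed dummy-gate defense can be pictured with a short sketch. The code below is a minimal illustration written with Qiskit, not the authors' implementation: the ansatz layout, the fixed dummy angles, and the helper names qnn_ansatz and add_dummy_gates are assumptions made for this example.

```python
# Minimal sketch (not the authors' exact construction) of a QNN classifier,
# the dummy-gate defense, and the transpiled view an untrusted provider sees.
# Ansatz layout, angles, and helper names are illustrative assumptions.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.circuit import Parameter


def qnn_ansatz(n_qubits: int, n_layers: int) -> QuantumCircuit:
    """Generic hardware-efficient classifier ansatz: RY rotations + CX entanglers."""
    qc = QuantumCircuit(n_qubits)
    params = [Parameter(f"theta_{l}_{q}")
              for l in range(n_layers) for q in range(n_qubits)]
    for l in range(n_layers):
        for q in range(n_qubits):
            qc.ry(params[l * n_qubits + q], q)
        for q in range(n_qubits - 1):
            qc.cx(q, q + 1)
    return qc


def add_dummy_gates(qc: QuantumCircuit, n_dummy_qubits: int,
                    n_dummy_layers: int, rng: np.random.Generator) -> QuantumCircuit:
    """Append dummy qubits plus extra layers of rotations with *fixed* angles."""
    widened = QuantumCircuit(qc.num_qubits + n_dummy_qubits)
    widened.compose(qc, qubits=range(qc.num_qubits), inplace=True)
    for _ in range(n_dummy_layers):
        for q in range(widened.num_qubits):
            widened.ry(float(rng.uniform(0, 2 * np.pi)), q)  # fixed, untrained angle
        for q in range(widened.num_qubits - 1):
            widened.cx(q, q + 1)
    return widened


rng = np.random.default_rng(0)

# 2-qubit, 3-layer classifier (the configuration quoted in the abstract);
# random numbers stand in for trained parameter values here.
model = qnn_ansatz(n_qubits=2, n_layers=3)
trained = model.assign_parameters(
    {p: float(rng.uniform(0, 2 * np.pi)) for p in model.parameters})

# Defense: 2 dummy qubits and 2 dummy layers of fixed-angle rotations.
defended = add_dummy_gates(trained, n_dummy_qubits=2, n_dummy_layers=2, rng=rng)

# What the untrusted cloud provider sees: the circuit transpiled to a native basis.
native = transpile(defended, basis_gates=["rz", "sx", "x", "cx"], optimization_level=3)
print(native.count_ops())
```

After transpilation, every rotation is rewritten into the same native gate sequences, so the fixed-angle dummy rotations are not readily distinguishable from trained ones in the circuit the provider observes; this is plausibly the intuition behind the increased RE overhead reported in the abstract.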

Original language: English (US)
Title of host publication: ASHES 2024 - Proceedings of the 2024 Workshop on Attacks and Solutions in Hardware Security, Co-Located with
Subtitle of host publication: CCS 2024
Publisher: Association for Computing Machinery, Inc
Pages: 48-57
Number of pages: 10
ISBN (Electronic): 9798400712357
DOIs
State: Published - Nov 19, 2024
Event: 2024 Workshop on Attacks and Solutions in Hardware Security, ASHES 2024 - Salt Lake City, United States
Duration: Oct 14, 2024 - Oct 18, 2024

Publication series

Name: ASHES 2024 - Proceedings of the 2024 Workshop on Attacks and Solutions in Hardware Security, Co-Located with: CCS 2024

Conference

Conference: 2024 Workshop on Attacks and Solutions in Hardware Security, ASHES 2024
Country/Territory: United States
City: Salt Lake City
Period: 10/14/24 - 10/18/24

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture
  • Electrical and Electronic Engineering
  • Safety, Risk, Reliability and Quality
