Evaluating Efficacy of Model Stealing Attacks and Defenses on Quantum Neural Networks

Satwik Kundu, Debarshi Kundu, Swaroop Ghosh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

Cloud hosting of quantum machine learning (QML) models exposes them to a range of vulnerabilities, the most significant of which is the model stealing attack. In this study, we assess the efficacy of such attacks in the realm of quantum computing. Our findings reveal that model stealing attacks can produce clone models achieving up to 0.9× and 0.99× of the original model's test accuracy when trained using Top-1 and Top-k labels, respectively (where k equals the number of classes). To defend against these attacks, we propose: 1) hardware variation-induced perturbation (HVIP) and 2) hardware and architecture variation-induced perturbation (HAVIP). Although these defenses achieved only limited success, they led to an important discovery: QML models trained on noisy hardware are naturally resistant to perturbation- or obfuscation-based defenses and attacks.
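As a rough illustration of the attack setting described in the abstract, the sketch below shows a generic model-extraction loop: an attacker queries a black-box victim classifier for Top-1 or Top-k labels and trains a clone on the responses. All names, dimensions, and the use of small PyTorch networks (standing in for the paper's cloud-hosted quantum neural networks) are assumptions made purely to keep the example runnable; this is not the authors' implementation.

```python
# Minimal model-extraction sketch (illustrative only, not the paper's code).
# A black-box "victim" classifier is queried for labels; a "clone" is trained
# on those labels. Plain PyTorch MLPs stand in for the QML models.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4       # assumption: k = number of classes, as in the abstract
NUM_FEATURES = 8      # hypothetical input dimension

victim = nn.Sequential(nn.Linear(NUM_FEATURES, 32), nn.ReLU(), nn.Linear(32, NUM_CLASSES))
clone = nn.Sequential(nn.Linear(NUM_FEATURES, 32), nn.ReLU(), nn.Linear(32, NUM_CLASSES))
optimizer = torch.optim.Adam(clone.parameters(), lr=1e-3)

def query_victim(x, top_k=None):
    """Black-box query: return Top-1 hard labels, or Top-k truncated probabilities."""
    with torch.no_grad():
        probs = F.softmax(victim(x), dim=1)
    if top_k is None:                        # Top-1 attack: only the predicted class
        return probs.argmax(dim=1)
    vals, idx = probs.topk(top_k, dim=1)     # Top-k attack: k highest class scores
    soft = torch.zeros_like(probs).scatter_(1, idx, vals)
    return soft / soft.sum(dim=1, keepdim=True)

for step in range(100):                      # attacker's query budget (hypothetical)
    x = torch.randn(64, NUM_FEATURES)        # attacker's surrogate query inputs
    optimizer.zero_grad()
    logits = clone(x)
    top1 = query_victim(x)                   # Top-1 variant: hard-label training
    loss = F.cross_entropy(logits, top1)
    # Top-k variant: distill from the truncated probability vector instead.
    # soft = query_victim(x, top_k=NUM_CLASSES)
    # loss = F.kl_div(F.log_softmax(logits, dim=1), soft, reduction="batchmean")
    loss.backward()
    optimizer.step()
```

In the Top-1 setting only the predicted class is available, so the clone is fit with a hard-label cross-entropy loss; in the Top-k setting the richer score vector can be distilled with a KL-divergence loss, which is one plausible reading of why the abstract reports higher clone accuracy for Top-k labels.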

Original language: English (US)
Title of host publication: GLSVLSI 2024 - Proceedings of the Great Lakes Symposium on VLSI 2024
Publisher: Association for Computing Machinery
Pages: 556-559
Number of pages: 4
ISBN (Electronic): 9798400706059
DOIs
State: Published - Jun 12 2024
Event: 34th Great Lakes Symposium on VLSI 2024, GLSVLSI 2024 - Clearwater, United States
Duration: Jun 12 2024 – Jun 14 2024

Publication series

Name: Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI

Conference

Conference: 34th Great Lakes Symposium on VLSI 2024, GLSVLSI 2024
Country/Territory: United States
City: Clearwater
Period: 6/12/24 – 6/14/24

All Science Journal Classification (ASJC) codes

  • General Engineering
