TY - GEN
T1 - Quantum Data Breach
T2 - 26th International Symposium on Quality Electronic Design, ISQED 2025
AU - Upadhyay, Suryansh
AU - Ghosh, Swaroop
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Quantum computing (QC) has the potential to revolutionize fields like machine learning, security, and healthcare. Quantum machine learning (QML) has emerged as a promising area, enhancing learning algorithms using quantum computers. However, QML models are lucrative targets due to their high training costs and extensive training times. The scarcity of quantum resources and long wait times further exacerbate the challenge. Additionally, QML providers may rely on a third-party quantum cloud for hosting the model, exposing the models and training data. As QML-as-a-Service (QMLaaS) becomes more prevalent, reliance on third-party quantum clouds can pose a significant threat. This paper shows that adversaries in quantum clouds can use white-box access to the QML model during training to extract the state preparation circuit (containing training data) along with the labels. The extracted training data can be reused to train a clone model or sold for profit. We propose a suite of techniques to prune and fix the incorrect labels. Results show that ≈90% of labels can be extracted correctly. The same model trained on the adversarially extracted data achieves ≈90% accuracy, closely matching the accuracy achieved when trained on the original data. To mitigate this threat, we propose masking labels/classes and modifying the cost function for label obfuscation, reducing adversarial label prediction accuracy by ≈70%.
AB - Quantum computing (QC) has the potential to revolutionize fields like machine learning, security, and healthcare. Quantum machine learning (QML) has emerged as a promising area, enhancing learning algorithms using quantum computers. However, QML models are lucrative targets due to their high training costs and extensive training times. The scarcity of quantum resources and long wait times further exacerbate the challenge. Additionally, QML providers may rely on a third-party quantum cloud for hosting the model, exposing the models and training data. As QML-as-a-Service (QMLaaS) becomes more prevalent, reliance on third-party quantum clouds can pose a significant threat. This paper shows that adversaries in quantum clouds can use white-box access to the QML model during training to extract the state preparation circuit (containing training data) along with the labels. The extracted training data can be reused to train a clone model or sold for profit. We propose a suite of techniques to prune and fix the incorrect labels. Results show that ≈90% of labels can be extracted correctly. The same model trained on the adversarially extracted data achieves ≈90% accuracy, closely matching the accuracy achieved when trained on the original data. To mitigate this threat, we propose masking labels/classes and modifying the cost function for label obfuscation, reducing adversarial label prediction accuracy by ≈70%.
UR - https://www.scopus.com/pages/publications/105007521963
UR - https://www.scopus.com/inward/citedby.url?scp=105007521963&partnerID=8YFLogxK
U2 - 10.1109/ISQED65160.2025.11014467
DO - 10.1109/ISQED65160.2025.11014467
M3 - Conference contribution
AN - SCOPUS:105007521963
T3 - Proceedings - International Symposium on Quality Electronic Design, ISQED
BT - Proceedings of the 26th International Symposium on Quality Electronic Design, ISQED 2025
PB - IEEE Computer Society
Y2 - 23 April 2025 through 25 April 2025
ER -