QNet: A Scalable and Noise-Resilient Quantum Neural Network Architecture for Noisy Intermediate-Scale Quantum Computers

Mahabubul Alam, Swaroop Ghosh

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Quantum machine learning (QML) is promising for potential speedups and improvements in conventional machine learning (ML) tasks. Existing QML models that use deep parametric quantum circuits (PQC) suffer from a large accumulation of gate errors and decoherence. To circumvent this issue, we propose a new QML architecture called QNet. QNet consists of several small quantum neural networks (QNNs), each of which can be executed on the small quantum computers that dominate NISQ-era hardware. By carefully choosing the size of these QNNs, QNet can exploit quantum computers of arbitrary size to solve supervised ML tasks of any scale. It also enables heterogeneous technology integration in a single QML application. Through empirical studies, we show the trainability and generalization of QNet and the impact of various configurable variables on its performance. We compare QNet against existing models and discuss potential issues and design considerations. In our study, we show 43% better accuracy on average than the existing models on noisy quantum hardware emulators. More importantly, QNet provides a blueprint for building noise-resilient QML models from a collection of small quantum neural networks on near-term noisy quantum devices.
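To make the core idea concrete, the sketch below approximates the QNet concept in PennyLane: a wide input is split across several small parametric quantum circuits ("sub-QNNs"), each shallow enough to run on a few noisy qubits, and their outputs are combined classically. This is a minimal illustrative toy, not the authors' implementation; the circuit templates, qubit counts, layer depths, and the averaging step are assumptions chosen for brevity (the paper's full architecture stacks layers of such small QNNs and trains them end to end).

# Minimal sketch (assumed components, not the authors' code):
# several small QNNs, each fitting on a few qubits, combined classically.
import pennylane as qml
from pennylane import numpy as np

N_QUBITS_PER_QNN = 4   # qubits per small QNN (illustrative)
N_QNNS = 3             # number of small QNNs (illustrative)
N_LAYERS = 2           # PQC depth per QNN (illustrative)

dev = qml.device("default.qubit", wires=N_QUBITS_PER_QNN)

@qml.qnode(dev)
def small_qnn(features, weights):
    # One small QNN: angle-encode a feature chunk, apply a shallow PQC,
    # and read out a single Pauli-Z expectation value.
    qml.AngleEmbedding(features, wires=range(N_QUBITS_PER_QNN))
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS_PER_QNN))
    return qml.expval(qml.PauliZ(0))

def qnet_forward(x, all_weights):
    # Split the input across the small QNNs and average their outputs.
    # The real QNet composes layers of such small QNNs; this is a one-layer toy.
    chunks = np.split(x, N_QNNS)
    outputs = [small_qnn(c, w) for c, w in zip(chunks, all_weights)]
    return np.mean(outputs)

# Example usage with random parameters and a random input vector.
shape = qml.StronglyEntanglingLayers.shape(n_layers=N_LAYERS, n_wires=N_QUBITS_PER_QNN)
all_weights = [np.random.uniform(0, np.pi, size=shape) for _ in range(N_QNNS)]
x = np.random.uniform(0, np.pi, size=N_QNNS * N_QUBITS_PER_QNN)
print(qnet_forward(x, all_weights))

Because each sub-QNN uses only a few qubits and shallow depth, the individual circuits accumulate far less gate error and decoherence than one deep monolithic PQC, which is the noise-resilience argument the abstract makes.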

Original language: English (US)
Article number: 755139
Journal: Frontiers in Physics
Volume: 9
DOIs
State: Published - Jan 5 2022

All Science Journal Classification (ASJC) codes

  • Biophysics
  • Materials Science (miscellaneous)
  • Mathematical Physics
  • General Physics and Astronomy
  • Physical and Theoretical Chemistry
