TY - GEN
T1 - QuaLITi
T2 - 38th International Conference on VLSI Design, VLSID 2025
AU - Phalak, Koustubh
AU - Ghosh, Swaroop
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Quantum Machine Learning (QML) is an accelerating field of study that leverages the principles of quantum mechanics to enhance and innovate within machine learning methodologies. However, Noisy Intermediate-Scale Quantum (NISQ) computers suffer from noise that corrupts the quantum states of the qubits and affects the training and inferencing accuracy. Furthermore, quantum computers have long access queues. A single execution with a pre-defined number of shots can take hours just to reach the top of the wait queue, which is especially disadvantageous to QML algorithms that are iterative in nature. Many vendors provide access to a suite of quantum hardware with varied qubit technologies, number of qubits, coupling architectures, and noise characteristics. However, present QML algorithms do not use them for the training procedure and often rely on local noiseless/noisy simulators due to cost and training timing overhead on real hardware. Taking these constraints into account, we perform a study to maximize the inferencing performance of QML workloads based on the choice of hardware selection. Specifically, we perform a detailed analysis of quantum classifiers (both training and inference through the lens of hardware queue wait times) on Iris and reduced Digits datasets under noise and varied conditions such as different hardware and coupling maps. We show that using multiple readily available hardware for training rather than relying on a single hardware, especially if it has a long queue depth of pending jobs, can lead to a performance impact of only 3-4% while providing up to 45X reduction in training wait time.
AB - Quantum Machine Learning (QML) is an accelerating field of study that leverages the principles of quantum mechanics to enhance and innovate within machine learning methodologies. However, Noisy Intermediate-Scale Quantum (NISQ) computers suffer from noise that corrupts the quantum states of the qubits and affects the training and inferencing accuracy. Furthermore, quantum computers have long access queues. A single execution with a pre-defined number of shots can take hours just to reach the top of the wait queue, which is especially disadvantageous to QML algorithms that are iterative in nature. Many vendors provide access to a suite of quantum hardware with varied qubit technologies, number of qubits, coupling architectures, and noise characteristics. However, present QML algorithms do not use them for the training procedure and often rely on local noiseless/noisy simulators due to cost and training timing overhead on real hardware. Taking these constraints into account, we perform a study to maximize the inferencing performance of QML workloads based on the choice of hardware selection. Specifically, we perform a detailed analysis of quantum classifiers (both training and inference through the lens of hardware queue wait times) on Iris and reduced Digits datasets under noise and varied conditions such as different hardware and coupling maps. We show that using multiple readily available hardware for training rather than relying on a single hardware, especially if it has a long queue depth of pending jobs, can lead to a performance impact of only 3-4% while providing up to 45X reduction in training wait time.
UR - https://www.scopus.com/pages/publications/105000198925
UR - https://www.scopus.com/pages/publications/105000198925#tab=citedBy
U2 - 10.1109/VLSID64188.2025.00064
DO - 10.1109/VLSID64188.2025.00064
M3 - Conference contribution
AN - SCOPUS:105000198925
T3 - Proceedings of the IEEE International Conference on VLSI Design
SP - 296
EP - 301
BT - Proceedings - 38th International Conference on VLSI Design, VLSID 2025 - held concurrently with 24th International Conference on Embedded Systems, ES 2025
PB - IEEE Computer Society
Y2 - 4 January 2025 through 8 January 2025
ER -