TY - GEN
T1 - Improving Reliability of Quantum True Random Number Generator using Machine Learning
AU - Ash-Saki, Abdullah
AU - Alam, Mahabubul
AU - Ghosh, Swaroop
N1 - Funding Information:
This work was supported by SRC (2847.001), NSF (CNS-1722557, CCF-1718474, DGE-1723687 and DGE-1821766), and DARPA YFA (D15AP00089).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/3
Y1 - 2020/3
N2 - A quantum computer (QC) can be used as a true random number generator (TRNG). However, various noise sources introduce bias into the generated numbers, which degrades their randomness. In this work, we analyze the impact of noise sources, e.g., gate error, decoherence, and readout error, on a QC-based TRNG by running a set of error calibration and quantum tomography experiments. We employ a hybrid quantum-classical gate parameter optimization routine to find an optimal gate parameter. The optimal parameter compensates for error-induced bias and improves the quality of the random numbers, allowing even the worst-quality qubits to be exploited. However, searching for the optimal parameter in a hybrid setup requires time-consuming iterations between classical and quantum machines. We therefore propose a machine learning model that predicts optimal quantum gate parameters from the qubit error specifications. We validate our approach using experimental results from IBM's publicly accessible quantum computers and the NIST statistical test suite. The proposed method can correct the bias of even the worst-case qubit by up to 88.57% on real quantum hardware.
AB - A quantum computer (QC) can be used as a true random number generator (TRNG). However, various noise sources introduce bias into the generated numbers, which degrades their randomness. In this work, we analyze the impact of noise sources, e.g., gate error, decoherence, and readout error, on a QC-based TRNG by running a set of error calibration and quantum tomography experiments. We employ a hybrid quantum-classical gate parameter optimization routine to find an optimal gate parameter. The optimal parameter compensates for error-induced bias and improves the quality of the random numbers, allowing even the worst-quality qubits to be exploited. However, searching for the optimal parameter in a hybrid setup requires time-consuming iterations between classical and quantum machines. We therefore propose a machine learning model that predicts optimal quantum gate parameters from the qubit error specifications. We validate our approach using experimental results from IBM's publicly accessible quantum computers and the NIST statistical test suite. The proposed method can correct the bias of even the worst-case qubit by up to 88.57% on real quantum hardware.
UR - http://www.scopus.com/inward/record.url?scp=85089946186&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089946186&partnerID=8YFLogxK
U2 - 10.1109/ISQED48828.2020.9137054
DO - 10.1109/ISQED48828.2020.9137054
M3 - Conference contribution
AN - SCOPUS:85089946186
T3 - Proceedings - International Symposium on Quality Electronic Design, ISQED
SP - 273
EP - 279
BT - Proceedings of the 21st International Symposium on Quality Electronic Design, ISQED 2020
PB - IEEE Computer Society
T2 - 21st International Symposium on Quality Electronic Design, ISQED 2020
Y2 - 25 March 2020 through 26 March 2020
ER -