A quantum computer (QC) can be used as a true random number generator (TRNG). However, various noise sources bias the generated bits, degrading their randomness. In this work, we analyze the impact of noise sources, e.g., gate error, decoherence, and readout error, on a QC-based TRNG by running a set of error-calibration and quantum-tomography experiments. We employ a hybrid quantum-classical gate-parameter optimization routine to find an optimal gate parameter. The optimal parameter compensates for error-induced bias and improves the quality of the random numbers, allowing even the worst-quality qubits to be exploited. However, searching for the optimal parameter in a hybrid setup requires time-consuming iterations between classical and quantum machines. We therefore propose a machine learning model that predicts optimal quantum gate parameters from the qubit error specifications. We validate our approach using experimental results from IBM's publicly accessible quantum computers and the NIST statistical test suite. The proposed method reduces the bias of even the worst-case qubit by up to 88.57% on real quantum hardware.
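The bias-compensation idea can be illustrated with a minimal sketch. This is not the paper's actual routine: it assumes a single-qubit RY(θ) gate, which ideally yields P(|1⟩) = sin²(θ/2), and a hypothetical asymmetric readout-error model with rates `E01` and `E10`. A classical bisection search then finds the θ that restores an observed probability of 0.5, mimicking the parameter-optimization step performed in the hybrid loop.

```python
import math

# Assumed (hypothetical) readout-error rates for illustration only:
E01 = 0.08  # probability of reading |0> as 1
E10 = 0.02  # probability of reading |1> as 0

def observed_p1(theta):
    """Observed probability of measuring 1 under the readout-error model."""
    p1 = math.sin(theta / 2) ** 2           # ideal excitation probability
    return p1 * (1 - E10) + (1 - p1) * E01  # mix in readout errors

def find_optimal_theta(target=0.5, tol=1e-9):
    """Bisection search: observed_p1 is monotone increasing on [0, pi]."""
    lo, hi = 0.0, math.pi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if observed_p1(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

theta_opt = find_optimal_theta()
```

With these error rates, a plain Hadamard-equivalent angle (θ = π/2) gives an observed probability of 0.53, so the compensated angle comes out slightly below π/2. In the paper's setting this search runs against real hardware, which is what makes the iterations costly and motivates the ML predictor.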