Researchers have conducted adversarial attacks against deep neural networks (DNNs) for health risk prediction in the white/gray-box setting to evaluate their robustness. However, since most real-world solutions are trained on private data and released as black-box services on the cloud, their robustness should be investigated in the black-box setting. Unfortunately, existing work fails to consider the uniqueness of electronic health records (EHRs). To fill this gap, we propose MedAttacker, the first black-box adversarial attack method against health risk prediction models, to investigate their vulnerability. It addresses the challenges posed by EHRs in two steps: hierarchical position selection, which selects the positions to attack within a reinforcement learning (RL) framework, and substitute selection, which identifies substitutes according to a score-based principle. In particular, by considering the temporal context within EHRs, MedAttacker initializes its RL position selection policy using the contribution score of each visit and the saliency score of each code, which integrates well with the deterministic substitute selection process driven by score changes. We evaluate MedAttacker by attacking three advanced risk prediction models in the black-box setting on multiple real-world datasets; MedAttacker consistently achieves the highest average success rate and even outperforms a recent white-box EHR adversarial attack technique in certain cases.
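To make the score-based substitute selection concrete, the following is a minimal, hypothetical sketch of how a black-box attacker could pick a replacement code at a chosen position by querying the model and keeping the candidate that most increases an attack score. All names (`toy_model`, `select_substitute`, the scoring rule) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): score-based substitute selection
# against a black-box risk predictor. `model_predict` stands in for the
# cloud service's query API; an EHR is a list of visits, each a list of
# medical-code strings. The scoring rule below is an assumption.

def attack_score(model_predict, visits, true_label):
    """Higher when the model is less confident in the true label."""
    prob = model_predict(visits)  # black-box query: P(y = 1 | visits)
    return (1.0 - prob) if true_label == 1 else prob

def select_substitute(model_predict, visits, visit_idx, code_idx,
                      candidates, true_label):
    """Greedily pick the candidate code that most increases the attack score."""
    base = attack_score(model_predict, visits, true_label)
    best_code, best_gain = visits[visit_idx][code_idx], 0.0
    for cand in candidates:
        perturbed = [v[:] for v in visits]      # copy the EHR
        perturbed[visit_idx][code_idx] = cand   # substitute a single code
        gain = attack_score(model_predict, perturbed, true_label) - base
        if gain > best_gain:
            best_code, best_gain = cand, gain
    return best_code, best_gain

# Toy black-box model: predicted risk grows with the number of "C.." codes.
def toy_model(visits):
    n = sum(code.startswith("C") for v in visits for code in v)
    return min(1.0, 0.2 * n)

ehr = [["C01", "C02"], ["C03", "D10"]]
code, gain = select_substitute(toy_model, ehr, 0, 0,
                               ["D20", "D21", "C09"], true_label=1)
print(code, round(gain, 2))  # → D20 0.2
```

In MedAttacker the positions themselves are chosen by the RL policy (initialized from visit contribution and code saliency scores); this sketch only illustrates the deterministic, score-change-driven substitution applied at a selected position.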