TY - JOUR
T1 - Distributed deep reinforcement learning for simulation control
AU - Pawar, Suraj
AU - Maulik, Romit
N1 - Publisher Copyright:
© 2021 IOP Publishing Ltd.
PY - 2021/6
Y1 - 2021/6
N2 - Several applications in the scientific simulation of physical systems can be formulated as control/optimization problems. The computational models for such systems generally contain hyperparameters, which control solution fidelity and computational expense. The tuning of these parameters is non-trivial, and the general approach is to manually spot-check for good combinations. This is because searching for the optimal hyperparameter configuration becomes intractable when the parameter space is large and the parameters may vary dynamically. To address this issue, we present a framework based on deep reinforcement learning (RL) to train a deep neural network agent that controls a model solve by varying parameters dynamically. First, we validate our RL framework on the problem of controlling chaos in chaotic systems by dynamically changing the parameters of the system. Subsequently, we illustrate the capabilities of our framework for accelerating the convergence of a steady-state computational fluid dynamics solver by automatically adjusting the relaxation factors of the discretized Navier-Stokes equations during run-time. The results indicate that run-time control of the relaxation factors by the learned policy leads to a significant reduction in the number of iterations required for convergence compared to a random selection of the relaxation factors. Our results point to potential benefits of learning adaptive hyperparameter strategies across different geometries and boundary conditions, with implications for reduced computational campaign expenses.
UR - http://www.scopus.com/inward/record.url?scp=85104105105&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85104105105&partnerID=8YFLogxK
DO - 10.1088/2632-2153/abdaf8
M3 - Article
AN - SCOPUS:85104105105
SN - 2632-2153
VL - 2
JO - Machine Learning: Science and Technology
JF - Machine Learning: Science and Technology
IS - 2
M1 - 025029
ER -