TY - JOUR
T1 - Deep Reinforcement Learning-Based Control for Real-Time Hybrid Simulation of Civil Structures
AU - Felipe Niño, Andrés
AU - Palacio-Betancur, Alejandro
AU - Miranda-Chiquito, Piedad
AU - David Amaya, Juan
AU - Silva, Christian E.
AU - Gutierrez Soto, Mariantonieta
AU - Felipe Giraldo, Luis
N1 - Publisher Copyright:
© 2025 The Author(s). International Journal of Robust and Nonlinear Control published by John Wiley & Sons Ltd.
PY - 2025
Y1 - 2025
N2 - Real-time Hybrid Simulation (RTHS) is a cyber-physical technique that studies the dynamic behavior of a system by combining physical and numerical components coupled through a boundary condition enforcer. In structural engineering, the numerical components are subjected to environmental loads that are translated into dynamic displacements applied to the physical substructure through an actuator. However, the dynamics of the coupling between components and the complexities of the system lead to synchronization challenges that affect the accuracy of the simulation, thus requiring tracking controllers to ensure its fidelity. This paper studies deep reinforcement learning (DRL) as a novel alternative for designing the tracking controller. Three controllers are designed: a DRL agent combined with conventional time delay compensation, a conventional feedback controller combined with a DRL agent, and a DRL agent with a complex neural network architecture. The proposed approaches are tested on a virtual RTHS benchmark problem, and the results are compared with an optimized controller that combines proportional-integral-derivative control with phase-lead compensation. The results show that DRL can address the synchronization challenges of RTHS with a model-free approach and simple neural network architectures. This study is a critical step toward model-free control methodologies that can transform and further develop the RTHS method. The proposed methodology can be used to address important challenges in RTHS, including nonlinearities and uncertainties of the physical substructure, complex boundary conditions, and computational efficiency when physical structures with complex dynamics are present.
AB - Real-time Hybrid Simulation (RTHS) is a cyber-physical technique that studies the dynamic behavior of a system by combining physical and numerical components coupled through a boundary condition enforcer. In structural engineering, the numerical components are subjected to environmental loads that are translated into dynamic displacements applied to the physical substructure through an actuator. However, the dynamics of the coupling between components and the complexities of the system lead to synchronization challenges that affect the accuracy of the simulation, thus requiring tracking controllers to ensure its fidelity. This paper studies deep reinforcement learning (DRL) as a novel alternative for designing the tracking controller. Three controllers are designed: a DRL agent combined with conventional time delay compensation, a conventional feedback controller combined with a DRL agent, and a DRL agent with a complex neural network architecture. The proposed approaches are tested on a virtual RTHS benchmark problem, and the results are compared with an optimized controller that combines proportional-integral-derivative control with phase-lead compensation. The results show that DRL can address the synchronization challenges of RTHS with a model-free approach and simple neural network architectures. This study is a critical step toward model-free control methodologies that can transform and further develop the RTHS method. The proposed methodology can be used to address important challenges in RTHS, including nonlinearities and uncertainties of the physical substructure, complex boundary conditions, and computational efficiency when physical structures with complex dynamics are present.
UR - http://www.scopus.com/inward/record.url?scp=85216259272&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85216259272&partnerID=8YFLogxK
U2 - 10.1002/rnc.7824
DO - 10.1002/rnc.7824
M3 - Article
AN - SCOPUS:85216259272
SN - 1049-8923
JO - International Journal of Robust and Nonlinear Control
JF - International Journal of Robust and Nonlinear Control
ER -