TY - GEN
T1 - Using Q-learning and genetic algorithms to improve the efficiency of weight adjustments for optimal control and design problems
AU - Kamali, Kaivan
AU - Jiang, Lijun
AU - Yen, John
AU - Wang, K. W.
N1 - Copyright:
Copyright 2020 Elsevier B.V., All rights reserved.
PY - 2005
Y1 - 2005
N2 - In traditional optimal control and design problems, the control gains and design parameters are usually derived to minimize a cost function reflecting the system performance and control effort. One major challenge of such approaches is the selection of the weighting matrices in the cost function, which are usually determined via trial and error and human intuition. While various techniques have been proposed to automate the weight selection process, they either cannot address complex design problems or suffer from slow convergence rates and high computational costs. We propose a layered approach based on Q-learning, a reinforcement learning technique, on top of genetic algorithms (GA) to determine the best weightings for optimal control and design problems. The layered approach allows for the reuse of knowledge: knowledge obtained via Q-learning in one design problem can be used to speed up convergence in a similar design problem. Moreover, the layered approach allows for solving optimization problems that cannot be solved by GA alone. To test the proposed method, we perform numerical experiments on a sample active-passive hybrid vibration control problem, namely adaptive structures with active-passive hybrid piezoelectric networks (APPN). These numerical experiments show that the proposed Q-learning scheme is a promising approach for such optimal control and design problems.
AB - In traditional optimal control and design problems, the control gains and design parameters are usually derived to minimize a cost function reflecting the system performance and control effort. One major challenge of such approaches is the selection of the weighting matrices in the cost function, which are usually determined via trial and error and human intuition. While various techniques have been proposed to automate the weight selection process, they either cannot address complex design problems or suffer from slow convergence rates and high computational costs. We propose a layered approach based on Q-learning, a reinforcement learning technique, on top of genetic algorithms (GA) to determine the best weightings for optimal control and design problems. The layered approach allows for the reuse of knowledge: knowledge obtained via Q-learning in one design problem can be used to speed up convergence in a similar design problem. Moreover, the layered approach allows for solving optimization problems that cannot be solved by GA alone. To test the proposed method, we perform numerical experiments on a sample active-passive hybrid vibration control problem, namely adaptive structures with active-passive hybrid piezoelectric networks (APPN). These numerical experiments show that the proposed Q-learning scheme is a promising approach for such optimal control and design problems.
UR - http://www.scopus.com/inward/record.url?scp=33144463894&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33144463894&partnerID=8YFLogxK
U2 - 10.1115/detc2005-85303
DO - 10.1115/detc2005-85303
M3 - Conference contribution
AN - SCOPUS:33144463894
SN - 079184739X
SN - 9780791847398
T3 - Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference - DETC2005
SP - 43
EP - 50
BT - Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference - DETC2005
PB - American Society of Mechanical Engineers
T2 - DETC2005: ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
Y2 - 24 September 2005 through 28 September 2005
ER -
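
The abstract above describes the layered scheme only at a high level. The following Python sketch is purely illustrative, not the authors' implementation: it shows the general idea of an outer Q-learning loop that selects a cost-function weighting and an inner genetic algorithm (GA) that optimizes design parameters under that weighting, with the achieved cost fed back as reward. The toy cost function, the candidate weights, the hyperparameters, and all names below are assumptions made for illustration.

    # Illustrative sketch only (assumed names and parameters throughout).
    import random

    WEIGHT_CHOICES = [0.1, 1.0, 10.0]      # candidate control-effort weights (assumed)
    ACTIONS = range(len(WEIGHT_CHOICES))

    def toy_cost(params, weight):
        """Hypothetical cost: tracking error plus weighted control effort."""
        error = (params[0] - 1.0) ** 2
        effort = params[1] ** 2
        return error + weight * effort

    def ga_optimize(weight, pop_size=20, generations=30):
        """Minimal GA: evolve 2-d parameter vectors minimizing toy_cost."""
        pop = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda p: toy_cost(p, weight))
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                children.append([(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)])
            pop = parents + children
        best = min(pop, key=lambda p: toy_cost(p, weight))
        return best, toy_cost(best, weight)

    def q_learning_weight_selection(episodes=20, alpha=0.5, epsilon=0.2):
        """Outer loop: Q-learning over the discrete weight choices.

        A single-state (bandit-style) formulation is used here for brevity;
        the reward is the negative cost achieved by the inner GA.
        """
        q = [0.0 for _ in ACTIONS]
        for _ in range(episodes):
            a = (random.choice(list(ACTIONS)) if random.random() < epsilon
                 else max(ACTIONS, key=lambda i: q[i]))
            _, cost = ga_optimize(WEIGHT_CHOICES[a])
            q[a] += alpha * (-cost - q[a])    # incremental Q-value update
        best_action = max(ACTIONS, key=lambda i: q[i])
        return WEIGHT_CHOICES[best_action], q

    if __name__ == "__main__":
        weight, q_values = q_learning_weight_selection()
        print("selected weight:", weight, "Q-values:", q_values)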