TY - JOUR
T1 - A Novel Model-Free Deep Reinforcement Learning Framework for Energy Management of a PV Integrated Energy Hub
AU - Dolatabadi, Amirhossein
AU - Abdeltawab, Hussein
AU - Mohamed, Yasser Abdel Rady I.
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2023/9/1
Y1 - 2023/9/1
N2 - This paper utilizes a fully model-free and data-driven deep reinforcement learning (DRL) framework to develop an intelligent controller that can exploit information to optimally schedule the energy hub with the aim of minimizing energy costs and emissions. By formulating the energy hub scheduling problem over a multi-dimensional continuous state and action space, the proposed deep deterministic policy gradient (DDPG) method enables more cost-effective control strategies. The method can lead to more efficient operation by considering nonlinear physical characteristics of the energy hub components, such as the nonconvex feasible operating regions of combined heat and power (CHP) units, the valve-point effects of power-only units, and fuel cell dynamic efficiency. Moreover, to enable the DDPG agent to learn an optimal policy efficiently, a hybrid forecasting model based on convolutional neural networks (CNNs) and bidirectional long short-term memories (BLSTMs) is developed to mitigate the risk associated with PV power generation, which can be highly intermittent, particularly on cloudy days. The effectiveness and applicability of the proposed scheduling framework in reducing energy costs and emissions while coping with uncertainties are demonstrated by comparing it against conventional robust optimization and stochastic programming approaches, as well as state-of-the-art DRL methods, in different case studies.
AB - This paper utilizes a fully model-free and data-driven deep reinforcement learning (DRL) framework to develop an intelligent controller that can exploit information to optimally schedule the energy hub with the aim of minimizing energy costs and emissions. By formulating the energy hub scheduling problem over a multi-dimensional continuous state and action space, the proposed deep deterministic policy gradient (DDPG) method enables more cost-effective control strategies. The method can lead to more efficient operation by considering nonlinear physical characteristics of the energy hub components, such as the nonconvex feasible operating regions of combined heat and power (CHP) units, the valve-point effects of power-only units, and fuel cell dynamic efficiency. Moreover, to enable the DDPG agent to learn an optimal policy efficiently, a hybrid forecasting model based on convolutional neural networks (CNNs) and bidirectional long short-term memories (BLSTMs) is developed to mitigate the risk associated with PV power generation, which can be highly intermittent, particularly on cloudy days. The effectiveness and applicability of the proposed scheduling framework in reducing energy costs and emissions while coping with uncertainties are demonstrated by comparing it against conventional robust optimization and stochastic programming approaches, as well as state-of-the-art DRL methods, in different case studies.
UR - http://www.scopus.com/inward/record.url?scp=85139847709&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139847709&partnerID=8YFLogxK
U2 - 10.1109/TPWRS.2022.3212938
DO - 10.1109/TPWRS.2022.3212938
M3 - Article
AN - SCOPUS:85139847709
SN - 0885-8950
VL - 38
SP - 4840
EP - 4852
JO - IEEE Transactions on Power Systems
JF - IEEE Transactions on Power Systems
IS - 5
ER -