TY - GEN
T1 - Offline Reinforcement Learning for Wireless Network Optimization with Mixture Datasets
AU - Yang, Kun
AU - Shen, Cong
AU - Yang, Jing
AU - Yeh, Shu Ping
AU - Sydir, Jerry
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The recent development of reinforcement learning (RL) has boosted the adoption of online RL for wireless radio resource management (RRM). However, online RL algorithms require direct interactions with the environment, which may be undesirable given the potential performance loss due to the unavoidable exploration in RL. In this work, we first investigate the use of offline RL algorithms in solving the RRM problem. We evaluate several state-of-the-art offline RL algorithms, including behavior constrained Q-learning (BCQ), conservative Q-learning (CQL), and implicit Q-learning (IQL), for a specific RRM problem that aims at maximizing a linear combination of sum and 5-percentile rates via user scheduling. We observe that the performance of offline RL for the RRM problem depends critically on the behavior policy used for data collection, and further propose a novel offline RL solution that leverages heterogeneous datasets collected by different behavior policies. We show that with a proper mixture of the datasets, offline RL can produce a near-optimal RL policy even when all involved behavior policies are highly suboptimal.
AB - The recent development of reinforcement learning (RL) has boosted the adoption of online RL for wireless radio resource management (RRM). However, online RL algorithms require direct interactions with the environment, which may be undesirable given the potential performance loss due to the unavoidable exploration in RL. In this work, we first investigate the use of offline RL algorithms in solving the RRM problem. We evaluate several state-of-the-art offline RL algorithms, including behavior constrained Q-learning (BCQ), conservative Q-learning (CQL), and implicit Q-learning (IQL), for a specific RRM problem that aims at maximizing a linear combination of sum and 5-percentile rates via user scheduling. We observe that the performance of offline RL for the RRM problem depends critically on the behavior policy used for data collection, and further propose a novel offline RL solution that leverages heterogeneous datasets collected by different behavior policies. We show that with a proper mixture of the datasets, offline RL can produce a near-optimal RL policy even when all involved behavior policies are highly suboptimal.
UR - http://www.scopus.com/inward/record.url?scp=85181393599&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85181393599&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF59524.2023.10477008
DO - 10.1109/IEEECONF59524.2023.10477008
M3 - Conference contribution
AN - SCOPUS:85181393599
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 629
EP - 633
BT - Conference Record of the 57th Asilomar Conference on Signals, Systems and Computers, ACSSC 2023
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 57th Asilomar Conference on Signals, Systems and Computers, ACSSC 2023
Y2 - 29 October 2023 through 1 November 2023
ER -