StateMask: Explaining Deep Reinforcement Learning through State Mask

Zelei Cheng, Xian Wu, Jiahao Yu, Wenhai Sun, Wenbo Guo, Xinyu Xing

Research output: Contribution to journal › Conference article › peer-review


Abstract

Despite the promising performance of deep reinforcement learning (DRL) agents in many challenging scenarios, the black-box nature of these agents greatly limits their applications in critical domains. Prior research has proposed several explanation techniques to understand the deep learning-based policies in RL. Most existing methods explain why an agent takes individual actions rather than pinpointing the critical steps to its final reward. To fill this gap, we propose StateMask, a novel method to identify the states most critical to the agent's final reward. The high-level idea of StateMask is to learn a mask net that blinds a target agent and forces it to take random actions at some steps without compromising the agent's performance. Through careful design, we can theoretically ensure that the masked agent performs similarly to the original agent. We evaluate StateMask in various popular RL environments and show its superiority over existing explainers in explanation fidelity. We also show that StateMask has better utilities, such as launching adversarial attacks and patching policy errors.
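The masking idea described in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not the paper's implementation: `mask_net` stands in for the learned mask network (here just any callable returning a per-state masking probability), and the environment, policy, and rollout interface are invented for illustration. At each step the mask net decides whether to blind the agent (take a uniformly random action) or let the target policy act; states the mask net learns it must never blind are the candidates for "critical" states.

```python
import random

def masked_rollout(policy, mask_net, env_step, init_state,
                   n_actions, horizon, seed=0):
    """Roll out a target policy under a state mask.

    policy:    state -> action (the fixed target agent; assumed interface)
    mask_net:  state -> probability in [0, 1] of blinding the agent here
    env_step:  (state, action) -> (next_state, reward) (toy environment)
    Returns the episode return and the list of masked (blinded) steps.
    """
    rng = random.Random(seed)
    state, total_reward, masked_steps = init_state, 0.0, []
    for t in range(horizon):
        if rng.random() < mask_net(state):
            # Blinded step: the agent is forced to act randomly.
            action = rng.randrange(n_actions)
            masked_steps.append(t)
        else:
            # Normal step: follow the target policy.
            action = policy(state)
        state, reward = env_step(state, action)
        total_reward += reward
    # Intuition: if blinding a state barely changes the return, that state
    # is non-critical; states the trained mask net avoids blinding are the
    # ones most critical to the final reward.
    return total_reward, masked_steps

# Toy usage: a policy that always picks action 0 in an environment that
# pays reward 1 per step, with a mask net that never blinds the agent.
ret, masked = masked_rollout(policy=lambda s: 0,
                             mask_net=lambda s: 0.0,
                             env_step=lambda s, a: (s, 1.0),
                             init_state=0, n_actions=2, horizon=5)
```

In the actual method, the mask net is trained (with a fidelity-style constraint) so that the masked agent's return stays close to the original agent's; this sketch only shows the rollout mechanics, not that training objective.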

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: Dec 10, 2023 – Dec 16, 2023

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
