Abstract
Despite the promising performance of deep reinforcement learning (DRL) agents in many challenging scenarios, the black-box nature of these agents greatly limits their applications in critical domains. Prior research has proposed several explanation techniques to understand the deep learning-based policies in RL. Most existing methods explain why an agent takes individual actions rather than pinpointing the critical steps to its final reward. To fill this gap, we propose StateMask, a novel method to identify the states most critical to the agent's final reward. The high-level idea of StateMask is to learn a mask net that blinds a target agent and forces it to take random actions at some steps without compromising the agent's performance. Through careful design, we can theoretically ensure that the masked agent performs similarly to the original agent. We evaluate StateMask in various popular RL environments and show its superiority over existing explainers in explanation fidelity. We also show that StateMask has better utilities, such as launching adversarial attacks and patching policy errors.
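The core mechanism described above, a mask net that decides per state whether to blind the agent, can be illustrated with a minimal sketch. All names here (`mask_net`, `target_policy`, `masked_step`) are hypothetical; the actual StateMask trains the mask net so that blinding does not degrade the agent's return, which this toy omits.

```python
import random

def masked_step(state, target_policy, mask_net, action_space):
    """Act with the target policy unless the mask net blinds this step.

    States the mask net flags as non-critical get a random action;
    the remaining (critical) states follow the original agent.
    """
    if mask_net(state):
        return random.choice(action_space)  # blinded: random action
    return target_policy(state)             # critical: agent's own action

# Toy usage: blind even-numbered states; the policy always picks action 0.
random.seed(0)
trajectory = [
    masked_step(s, lambda s: 0, lambda s: s % 2 == 0, [0, 1, 2])
    for s in range(6)
]
```

States the mask net leaves unblinded (the odd-numbered ones here) are exactly the ones StateMask would report as critical, since randomizing them is not deemed safe.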
| Original language | English (US) |
|---|---|
| Journal | Advances in Neural Information Processing Systems |
| Volume | 36 |
| State | Published - 2023 |
| Event | 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States. Duration: Dec 10 2023 → Dec 16 2023 |
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Information Systems
- Signal Processing