TY - GEN
T1 - Programmatically interpretable reinforcement learning
AU - Verma, Abhinav
AU - Murali, Vijayaraghavan
AU - Singh, Rishabh
AU - Kohli, Pushmeet
AU - Chaudhuri, Swarat
N1 - Publisher Copyright:
© 2018 by the Authors. All rights reserved.
PY - 2018
Y1 - 2018
N2 - We present a reinforcement learning framework, called Programmatically Interpretable Reinforcement Learning (PIRL), that is designed to generate interpretable and verifiable agent policies. Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL represents policies using a high-level, domain-specific programming language. Such programmatic policies have the benefits of being more easily interpreted than neural networks, and being amenable to verification by symbolic methods. We propose a new method, called Neurally Directed Program Search (NDPS), for solving the challenging nonsmooth optimization problem of finding a programmatic policy with maximal reward. NDPS works by first learning a neural policy network using DRL, and then performing a local search over programmatic policies that seeks to minimize a distance from this neural "oracle". We evaluate NDPS on the task of learning to drive a simulated car in the TORCS car-racing environment. We demonstrate that NDPS is able to discover human-readable policies that pass some significant performance bars. We also show that PIRL policies can have smoother trajectories, and can be more easily transferred to environments not encountered during training, than corresponding policies discovered by DRL.
AB - We present a reinforcement learning framework, called Programmatically Interpretable Reinforcement Learning (PIRL), that is designed to generate interpretable and verifiable agent policies. Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL represents policies using a high-level, domain-specific programming language. Such programmatic policies have the benefits of being more easily interpreted than neural networks, and being amenable to verification by symbolic methods. We propose a new method, called Neurally Directed Program Search (NDPS), for solving the challenging nonsmooth optimization problem of finding a programmatic policy with maximal reward. NDPS works by first learning a neural policy network using DRL, and then performing a local search over programmatic policies that seeks to minimize a distance from this neural "oracle". We evaluate NDPS on the task of learning to drive a simulated car in the TORCS car-racing environment. We demonstrate that NDPS is able to discover human-readable policies that pass some significant performance bars. We also show that PIRL policies can have smoother trajectories, and can be more easily transferred to environments not encountered during training, than corresponding policies discovered by DRL.
UR - http://www.scopus.com/inward/record.url?scp=85057311754&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057311754&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85057311754
T3 - 35th International Conference on Machine Learning, ICML 2018
SP - 8024
EP - 8033
BT - 35th International Conference on Machine Learning, ICML 2018
A2 - Krause, Andreas
A2 - Dy, Jennifer
PB - International Machine Learning Society (IMLS)
T2 - 35th International Conference on Machine Learning, ICML 2018
Y2 - 10 July 2018 through 15 July 2018
ER -