
LTL-Constrained Policy Optimization with Cycle Experience Replay

  • Ameesh Shah
  • Cameron Voloshin
  • Chenxi Yang
  • Abhinav Verma
  • Swarat Chaudhuri
  • Sanjit A. Seshia

Research output: Contribution to journal › Article › peer-review

Abstract

Linear Temporal Logic (LTL) offers a precise means for constraining the behavior of reinforcement learning agents. However, in many settings where both satisfaction and optimality conditions are present, LTL is insufficient to capture both. Instead, LTL-constrained policy optimization, where the goal is to optimize a scalar reward under LTL constraints, is needed. This constrained optimization problem proves difficult in deep reinforcement learning (DRL) settings, where learned policies often ignore the LTL constraint due to the sparse nature of LTL satisfaction. To alleviate the sparsity issue, we introduce Cycle Experience Replay (CyclER), a novel reward-shaping technique that exploits the underlying structure of the LTL constraint to guide a policy toward satisfaction by encouraging partial behaviors compliant with the constraint. We provide a theoretical guarantee that optimizing CyclER will achieve policies that satisfy the LTL constraint with near-optimal probability. We evaluate CyclER in three continuous control domains. Our experimental results show that optimizing CyclER in tandem with the existing scalar reward outperforms existing reward-shaping methods at finding performant LTL-satisfying policies.
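To make the sparsity issue and the reward-shaping idea concrete, the following is an illustrative sketch (not the paper's CyclER algorithm): an LTL constraint such as GF goal ("reach the goal infinitely often") can be compiled into a Büchi automaton, and shaping reward can be paid each time a trajectory closes a cycle through an accepting state, rather than only at the (never-observed) infinite horizon. The two-state automaton, the `shaped_reward` function, and the bonus value below are all hypothetical choices for this example.

```python
# Illustrative sketch, NOT the paper's CyclER method: reward shaping over a
# toy Buchi automaton for the LTL constraint GF goal. Raw LTL satisfaction
# is an infinite-horizon property, so the shaped signal instead pays a bonus
# each time a cycle through the acceptance condition is closed.

# Hypothetical automaton: state 0 is non-accepting, state 1 is accepting.
# Observing the proposition "goal" moves 0 -> 1; its absence returns to 0.
TRANSITIONS = {
    (0, True): 1,   # seeing the goal enters the accepting state
    (0, False): 0,
    (1, True): 1,
    (1, False): 0,  # leaving the goal begins the next cycle
}
ACCEPTING = {1}

def shaped_reward(q, goal_seen, bonus=1.0):
    """Return (next automaton state, shaping reward).

    A bonus is paid on each transition that enters an accepting state,
    i.e. each time a cycle through the acceptance condition is closed.
    """
    q_next = TRANSITIONS[(q, goal_seen)]
    r = bonus if q_next in ACCEPTING and q not in ACCEPTING else 0.0
    return q_next, r

# A finite trace of goal observations; the agent earns reward per closed cycle.
trace = [False, True, True, False, True]
q, total = 0, 0.0
for obs in trace:
    q, r = shaped_reward(q, obs)
    total += r
# total == 2.0: two cycles through the accepting state were closed
```

In practice this shaped signal would be optimized jointly with the task's scalar reward, which is the constrained-optimization setting the abstract describes.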

Original language: English (US)
Pages (from-to): 1-27
Number of pages: 27
Journal: Transactions on Machine Learning Research
Volume: 2025-March
State: Published - 2025

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

