Federated reinforcement learning for generalizable motion planning

Zhenyuan Yuan, Siyuan Xu, Minghui Zhu

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution


Abstract

This paper considers the problem of learning a control policy that generalizes well to novel environments, given a set of sample environments. We develop a federated learning framework that enables collaborative learning between multiple learners and a centralized server without sharing their raw data. In each iteration, each learner uploads its local control policy and the corresponding estimated normalized arrival time to the server, which then computes the global optimum among the learners and broadcasts the optimal policy back to the learners. Each learner then selects between its local control policy and the one received from the server for the next iteration. By leveraging the generalization error, our analysis shows that the proposed framework provides generalization guarantees on arrival time and safety, as well as consensus at the global optimal value in the limiting case. Monte Carlo simulations are conducted for evaluation.
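The abstract outlines a per-iteration protocol: learners improve their policies locally, upload the policy and its estimated normalized arrival time, the server selects the global optimum and broadcasts it, and each learner keeps the better of the two. The following is a minimal sketch of that loop under stated assumptions; all names (Learner, local_update, server_round) and the toy arrival-time estimate are hypothetical illustrations, not the authors' implementation.

    # Hypothetical sketch of one federated iteration; not the paper's code.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Learner:
        policy: List[float]                      # local control policy parameters
        arrival_time: float = float("inf")       # estimated normalized arrival time

        def local_update(self) -> None:
            # Placeholder for local policy improvement on the learner's own
            # sample environments (raw data never leaves the learner).
            self.arrival_time = sum(p * p for p in self.policy)

        def select(self, server_policy: List[float], server_time: float) -> None:
            # Keep whichever policy (local vs. broadcast) has the lower
            # estimated normalized arrival time.
            if server_time < self.arrival_time:
                self.policy, self.arrival_time = list(server_policy), server_time

    def server_round(learners: List[Learner]) -> None:
        # Each learner uploads (policy, arrival time); the server picks the
        # global optimum among learners and broadcasts it back.
        for lrn in learners:
            lrn.local_update()
        best = min(learners, key=lambda l: l.arrival_time)
        for lrn in learners:
            lrn.select(best.policy, best.arrival_time)

    if __name__ == "__main__":
        pool = [Learner(policy=[0.5, -0.2]), Learner(policy=[0.1, 0.3])]
        for _ in range(3):                       # a few federated iterations
            server_round(pool)
        print([l.arrival_time for l in pool])    # learners agree on the best value

Only policy parameters and scalar arrival-time estimates cross the network in this sketch, which mirrors the paper's premise that raw environment data is never shared.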

Original language: English (US)
Title of host publication: 2023 American Control Conference, ACC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 78-83
Number of pages: 6
ISBN (Electronic): 9798350328066
DOIs
State: Published - 2023
Event: 2023 American Control Conference, ACC 2023 - San Diego, United States
Duration: May 31 2023 - Jun 2 2023

Publication series

Name: Proceedings of the American Control Conference
Volume: 2023-May
ISSN (Print): 0743-1619

Conference

Conference: 2023 American Control Conference, ACC 2023
Country/Territory: United States
City: San Diego
Period: 5/31/23 - 6/2/23

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
