Fast Stochastic MPC Implementation via Policy Learning

Martina Mammarella, Abdulelah Altamimi, Mohammadreza Chamanbaz, Fabrizio Dabbene, Constantino Lagoa

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Stochastic Model Predictive Control (MPC) has gained popularity thanks to its capability of overcoming the conservativeness of robust approaches, at the expense of a higher computational demand. This is a critical issue, especially for sampling-based methods. In this letter, we propose a policy-learning MPC approach that aims at reducing the cost of solving stochastic optimization problems. The presented scheme relies on neural networks to identify a mapping between the current state of the system and the probabilistic constraints. This makes it possible to reduce the sample complexity to at most the dimension of the decision variable, significantly scaling down the computational burden of stochastic MPC approaches while preserving the same probabilistic guarantees. The efficacy of the proposed policy-learning MPC is demonstrated by means of a numerical example.
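The core idea described in the abstract — learning, offline, a map from the current state to the probabilistic constraints so that the online problem no longer carries a large number of sampled constraints — can be illustrated with a minimal sketch. The scalar system, the one-step horizon, and the use of a least-squares fit as a stand-in for the neural network are all simplifying assumptions for illustration, not the letter's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar system x+ = a*x + b*u + w, w ~ N(0, sigma^2),
# with the chance constraint x+ <= xmax (assumed setting, not the paper's).
a, b, sigma, xmax = 0.9, 1.0, 0.1, 1.0
N = 200  # number of disturbance samples used offline

def scenario_input_bound(x):
    """Largest u such that a*x + b*u + w_i <= xmax for all N sampled w_i."""
    w = rng.normal(0.0, sigma, N)
    return (xmax - a * x - w.max()) / b

# Offline: evaluate the sampled bound on a state grid and fit a simple
# least-squares model (a stand-in for the neural network in the letter;
# for this linear system the bound is affine in x).
xs = np.linspace(-1.0, 1.0, 50)
ys = np.array([scenario_input_bound(x) for x in xs])
coef = np.polyfit(xs, ys, 1)

def learned_bound(x):
    """Learned state-to-constraint map: one inequality replaces N samples."""
    return np.polyval(coef, x)

# Online: a one-step controller that tracks a desired input but clips it
# to the learned bound, instead of re-solving the sampled program.
def control(x, u_des):
    return min(u_des, learned_bound(x))
```

The payoff is in the online stage: the optimization sees a single learned inequality per constraint rather than the full set of sampled ones, which is what allows the sample complexity of the online problem to shrink.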

Original language: English (US)
Pages (from-to): 3020-3025
Number of pages: 6
Journal: IEEE Control Systems Letters
Volume: 6
State: Published - 2022

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Control and Optimization

