Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions

Research output: Contribution to journal › Article › peer-review



This paper studies the theoretical underpinnings of machine learning of ergodic Itô diffusions. The objective is to understand the convergence properties of the invariant statistics when the underlying system of stochastic differential equations (SDEs) is empirically estimated with a supervised regression framework. Using the perturbation theory of ergodic Markov chains and linear response theory, we deduce a linear dependence of the errors of the one-point and two-point invariant statistics on the error in learning the drift and diffusion coefficients. More importantly, our study shows that the usual L2-norm characterization of the generalization error is insufficient for achieving this linear dependence. We find that a sufficient condition for such linear dependence is a learning algorithm that produces a uniformly Lipschitz and consistent estimator in a hypothesis space that retains certain characteristics of the drift coefficients, such as the usual linear growth condition that guarantees the existence of solutions to the underlying SDEs. We examine these conditions for two well-understood learning algorithms: the kernel-based spectral regression method and shallow random neural networks with the ReLU activation function.
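To illustrate the kind of supervised regression framework the abstract refers to, the sketch below fits the drift of a simple ergodic Itô diffusion with a shallow random ReLU network (random features with trained outer weights only). This is a minimal, hypothetical example, not the paper's setup: the SDE (an Ornstein-Uhlenbeck process), all parameter values, and the ridge penalty are illustrative choices, and the drift targets come from finite differences of a simulated path.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
# (a 1-D ergodic Ito diffusion) with the Euler-Maruyama scheme.
theta, sigma, dt, n = 1.0, 0.5, 1e-2, 50_000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Drift regression targets from finite differences:
# (X_{k+1} - X_k)/dt ~ a(X_k) + noise of size sigma/sqrt(dt).
X, y = x[:-1], np.diff(x) / dt

# Shallow random ReLU network: phi_j(x) = max(w_j*x + b_j, 0) with frozen
# random inner weights; only the outer weights c are trained (ridge regression).
m = 128
W = rng.standard_normal(m)
b = rng.standard_normal(m)
Phi = np.maximum(np.outer(X, W) + b, 0.0)          # (n-1, m) feature matrix
lam = 1e-3 * len(X)                                 # mild ridge penalty
c = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)

def drift_hat(x_query):
    """Learned drift estimator a_hat(x) = sum_j c_j * relu(w_j*x + b_j)."""
    return np.maximum(np.outer(np.atleast_1d(x_query), W) + b, 0.0) @ c

# Compare with the true drift a(x) = -theta*x on the bulk of the
# invariant (stationary) distribution, where data is dense.
grid = np.linspace(-0.5, 0.5, 21)
err = np.max(np.abs(drift_hat(grid) + theta * grid))
print(f"max drift error on [-0.5, 0.5]: {err:.3f}")
```

The estimator here is Lipschitz by construction (a finite sum of ReLU ridge functions), which loosely mirrors the uniform Lipschitz condition the paper identifies as part of a sufficient condition for the linear error dependence of the invariant statistics.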

Original language: English (US)
Article number: 133022
Journal: Physica D: Nonlinear Phenomena
State: Published - Dec 2021

All Science Journal Classification (ASJC) codes

  • Statistical and Nonlinear Physics
  • Mathematical Physics
  • Condensed Matter Physics
  • Applied Mathematics
