Stability for the training of deep neural networks and other classifiers

Research output: Contribution to journal › Article › peer-review



We examine the stability of loss-minimizing training processes that are used for deep neural networks (DNNs) and other classifiers. While a classifier is optimized during training through a so-called loss function, its performance is usually evaluated by some measure of accuracy, such as the overall accuracy, which quantifies the proportion of objects that are correctly classified. This leads to the guiding question of stability: does decreasing loss through training always result in increased accuracy? We formalize the notion of stability and provide examples of instability. Our main result consists of two novel conditions on the classifier, either of which, if satisfied, ensures stability of training; that is, we derive tight bounds on accuracy as loss decreases. We also derive a sufficient condition for stability on the training set alone, identifying flat portions of the data manifold as potential sources of instability; this condition is explicitly verifiable on the training dataset. Our results do not depend on the algorithm used for training, as long as loss decreases with training.
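To make the guiding question concrete, here is a minimal numerical sketch (not taken from the paper) of the kind of instability the abstract refers to: with a binary cross-entropy loss and the usual 0.5 decision threshold, the mean loss over a dataset can decrease while the overall accuracy drops, because a large loss reduction on already-correct samples can outweigh one sample flipping to the wrong class. The data values and helper functions below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def cross_entropy(p, y):
    """Mean binary cross-entropy of predicted probabilities p for labels y."""
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def overall_accuracy(p, y):
    """Proportion of samples whose thresholded prediction matches the label."""
    return float(np.mean((p >= 0.5) == (y == 1)))

y = np.array([1, 1, 1])                  # three samples, all with label 1

p_before = np.array([0.60, 0.60, 0.60])  # modestly confident, all correct
p_after  = np.array([0.99, 0.99, 0.45])  # two very confident, one flipped wrong

print(cross_entropy(p_before, y), overall_accuracy(p_before, y))  # ~0.511, 1.00
print(cross_entropy(p_after, y),  overall_accuracy(p_after, y))   # ~0.273, 0.67
```

Here loss falls from about 0.511 to about 0.273 even though accuracy drops from 100% to 67%, so a training step that decreases loss need not increase accuracy; the paper's stability conditions rule out this behavior.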

Original language: English (US)
Pages (from-to): 2345-2390
Number of pages: 46
Journal: Mathematical Models and Methods in Applied Sciences
Issue number: 11
State: Published - Oct 1, 2021

All Science Journal Classification (ASJC) codes

  • Modeling and Simulation
  • Applied Mathematics

