Lessons in neural network training: overfitting may be harder than expected

Steve Lawrence, C. Lee Giles, Ah Chung Tsoi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

209 Scopus citations

Abstract

For many reasons, neural networks have become very popular machine learning models. Two of the most important aspects of such models are how well they generalize to unseen data and how well they scale with problem complexity. Using a controlled task with a known optimal training error, we investigate the convergence of the backpropagation (BP) algorithm. We find that the optimal solution is typically not found. Furthermore, we observe that networks larger than might be expected can result in lower training and generalization error. This result is supported by a further real-world example. We investigate the training behavior further by analyzing the weights in trained networks (excess degrees of freedom are seen to do little harm and to aid convergence) and by contrasting the interpolation characteristics of multi-layer perceptron neural networks (MLPs) and polynomial models (the overfitting behavior is very different: the MLP is often biased towards smoother solutions). Finally, we analyze relevant theory, outlining the reasons for the significant practical differences. These results call into question common beliefs about neural network training regarding convergence and optimal network size, suggest alternative guidelines for practical use (less fear of excess degrees of freedom), and help to direct future work (e.g., methods for creating more parsimonious solutions, the importance of the MLP/BP bias, and the possibly worse performance of 'improved' training algorithms).
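
The contrast between polynomial and MLP interpolation described in the abstract can be made concrete with a small experiment. The following is an illustrative sketch only, not the controlled task used in the paper: the toy 1-D dataset, the degree-15 polynomial, the oversized single-hidden-layer MLP, and all hyperparameters are assumptions chosen for demonstration, with scikit-learn's MLPRegressor standing in for a backpropagation-trained MLP.

```python
# Illustrative sketch (assumed setup, not the paper's experiments):
# fit a high-degree polynomial and an oversized MLP to the same sparse,
# noisy 1-D data and compare how they interpolate between the samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# 20 noisy samples of a smooth 1-D target function.
x_train = np.sort(rng.uniform(-1.0, 1.0, 20))
y_train = np.sin(3.0 * x_train) + 0.1 * rng.normal(size=x_train.shape)
x_test = np.linspace(-1.0, 1.0, 200)
y_test = np.sin(3.0 * x_test)

# High-degree polynomial: many free parameters relative to the data;
# it tends to oscillate between the training points.
poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=15)
poly_mse = np.mean((poly(x_test) - y_test) ** 2)

# MLP with far more hidden units than the task needs, trained by
# stochastic gradient descent (gradients computed via backpropagation);
# despite the excess degrees of freedom it often settles on a smoother fit.
mlp = MLPRegressor(hidden_layer_sizes=(100,), activation="tanh",
                   solver="sgd", learning_rate_init=0.01,
                   max_iter=5000, random_state=0)
mlp.fit(x_train.reshape(-1, 1), y_train)
mlp_mse = np.mean((mlp.predict(x_test.reshape(-1, 1)) - y_test) ** 2)

print(f"degree-15 polynomial test MSE: {poly_mse:.4f}")
print(f"oversized MLP        test MSE: {mlp_mse:.4f}")
```

On runs like this, the high-degree polynomial typically oscillates between the sparse training points while the over-parameterized MLP tends toward a smoother interpolant, which is the qualitative difference in overfitting behavior the abstract describes.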

Original language: English (US)
Title of host publication: Proceedings of the National Conference on Artificial Intelligence
Editors: Anon
Publisher: AAAI
Pages: 540-545
Number of pages: 6
State: Published - 1997
Event: Proceedings of the 1997 14th National Conference on Artificial Intelligence, AAAI 97 - Providence, RI, USA
Duration: Jul 27, 1997 - Jul 31, 1997

Other

Other: Proceedings of the 1997 14th National Conference on Artificial Intelligence, AAAI 97
City: Providence, RI, USA
Period: 7/27/97 - 7/31/97

All Science Journal Classification (ASJC) codes

  • Software
