Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping

Rich Caruana, Steve Lawrence, Lee Giles

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

535 Scopus citations

Abstract

The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity generalize well when trained with backprop and early stopping. Experiments suggest two reasons for this: 1) Overfitting can vary significantly in different regions of the model. Excess capacity allows a better fit to regions of high non-linearity, and backprop often avoids overfitting the regions of low non-linearity. 2) Regardless of size, nets learn task subcomponents in a similar sequence. Big nets pass through stages similar to those learned by smaller nets. Early stopping can stop training the large net when it generalizes comparably to a smaller net. We also show that conjugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linearity.
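The sketch below is an illustration of the training setup the abstract describes, not the authors' code: an over-sized single-hidden-layer net trained with plain backprop (gradient descent) and early stopping on a held-out validation set. The toy target, network size, learning rate, and patience are assumptions chosen for illustration; the target mixes a smooth (low non-linearity) region and an oscillatory (high non-linearity) region in the spirit of the paper's analysis.

```python
# Minimal early-stopping sketch, assuming a toy 1-D regression task.
# Not the authors' implementation; all names and settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Linear (low non-linearity) for x < 0, oscillatory (high non-linearity) for x >= 0.
    return np.where(x < 0, 0.5 * x, np.sin(4 * x))

def make_split(n):
    x = rng.uniform(-2, 2, size=(n, 1))
    y = target(x) + 0.1 * rng.normal(size=(n, 1))
    return x, y

x_tr, y_tr = make_split(200)   # training set
x_va, y_va = make_split(200)   # validation set used for early stopping

# Over-sized single-hidden-layer net ("excess capacity").
n_hidden = 100
W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

lr, patience = 0.05, 200
best_val, since_best, best = np.inf, 0, None

for epoch in range(20000):
    # Backprop: gradients of mean squared error through the tanh hidden layer.
    h, pred = forward(x_tr)
    err = pred - y_tr
    gW2 = h.T @ err / len(x_tr)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x_tr.T @ dh / len(x_tr)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    # Early stopping: keep the weights with the best validation error and
    # halt once validation error has not improved for `patience` epochs.
    val = mse(forward(x_va)[1], y_va)
    if val < best_val:
        best_val, since_best = val, 0
        best = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
    else:
        since_best += 1
        if since_best > patience:
            break

W1, b1, W2, b2 = best
print(f"stopped at epoch {epoch}, best validation MSE {best_val:.4f}")
```

Even with far more hidden units than the task needs, the net returned at the validation minimum behaves comparably to a smaller net, which is the effect the paper studies.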

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 13 - Proceedings of the 2000 Conference, NIPS 2000
Publisher: Neural Information Processing Systems Foundation
ISBN (Print): 0262122413, 9780262122412
State: Published - 2001
Event: 14th Annual Neural Information Processing Systems Conference, NIPS 2000 - Denver, CO, United States
Duration: Nov 27, 2000 - Dec 2, 2000

Other

Other: 14th Annual Neural Information Processing Systems Conference, NIPS 2000
Country/Territory: United States
City: Denver, CO
Period: 11/27/00 - 12/2/00

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
