Abstract

In this paper we propose and evaluate a heuristically motivated method for adaptively modifying the learning rate in backpropagation (perhaps the most widely used neural network learning algorithm) that does not require the estimation of higher-order derivatives. The modified backpropagation algorithm uses a simple heuristic to select a learning-rate value at each epoch. We present numerous simulations on real-world data sets using the modified algorithm and compare the results with those obtained with standard backpropagation, as well as with various modifications of the standard algorithm (e.g., flat-spot elimination methods) that have been discussed in the literature. Our simulation results suggest that the adaptive learning-rate modification substantially speeds up the convergence of backpropagation. Furthermore, it makes the initial choice of the learning rate fairly unimportant, since the method allows the rate to change and settle at a value suited to the specific problem.
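The abstract does not spell out the heuristic itself. A common rule in this family (sometimes called the "bold driver" rule) grows the learning rate slightly after an epoch in which the error decreased and shrinks it sharply after an increase; the sketch below illustrates that idea on a tiny network, and is an assumption rather than the paper's exact method. The network shape, the XOR task, and the factors 1.05 and 0.5 are all illustrative choices.

```python
# A minimal sketch (NOT the paper's exact rule): "bold driver" style
# learning-rate adaptation for a one-hidden-layer network on XOR.
# The increase/decrease factors (1.05, 0.5) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5                 # the initial choice matters little: the rule re-tunes it
prev_error = np.inf
for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    error = 0.5 * np.sum((out - y) ** 2)

    # adapt the learning rate from the change in epoch error
    if error < prev_error:
        lr *= 1.05       # progress: grow the step a little
    else:
        lr *= 0.5        # overshoot: shrink the step sharply
    prev_error = error

    # backward pass (standard backprop for squared error + sigmoid)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(f"final error {error:.4f}, settled learning rate {lr:.3f}")
```

Note that only first-order gradient information is used: the rate is adapted from the observed epoch error alone, with no estimation of higher-order derivatives, which matches the property the abstract emphasizes.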

Original language: English (US)
Pages (from-to): 89-95
Number of pages: 7
Journal: Microcomputer Applications
Volume: 18
Issue number: 3
State: Published - Dec 1 1999

All Science Journal Classification (ASJC) codes

  • General Computer Science
