Abstract
This paper develops a new training strategy to improve the efficiency of estimating the weights and biases in a feedforward neural network (FNN). We propose a local linear approximation (LLA) algorithm, which approximates the ReLU activation with a linear function at the neuron level and estimates the weights and biases of a one-hidden-layer neural network iteratively. We further propose the layer-wise optimized adaptive neural network (LOAN), in which the LLA is used to estimate the weights and biases of the LOAN layer by layer adaptively. We compare the performance of the LLA with commonly used machine-learning procedures on seven benchmark data sets. The numerical comparison indicates that the proposed algorithm may outperform the existing procedures in terms of both training time and prediction accuracy.
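To make the idea concrete, the sketch below shows one way an LLA-style update could look for a one-hidden-layer ReLU network: at the current iterate, ReLU(z) is treated as locally linear, z·1(z > 0), so the output weights and then each neuron's weights and bias can be updated by ridge-regularized least squares. This is a minimal sketch under those assumptions; the alternating update structure, the name `lla_fit`, and all tuning parameters are illustrative and not the authors' exact procedure.

```python
# Hypothetical sketch of a local-linear-approximation (LLA) style fit for a
# one-hidden-layer ReLU network  y ~ sum_k a_k * ReLU(w_k^T x + b_k).
# The alternating least-squares scheme and all names are illustrative assumptions.
import numpy as np

def lla_fit(X, y, n_hidden=10, n_iter=20, ridge=1e-6, seed=0):
    n, p = X.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(p, n_hidden))   # hidden-layer weights
    b = np.zeros(n_hidden)                          # hidden-layer biases
    a = np.zeros(n_hidden)                          # output weights
    Xb = np.hstack([X, np.ones((n, 1))])            # design matrix with intercept column

    for _ in range(n_iter):
        Z = X @ W + b                               # pre-activations
        D = (Z > 0).astype(float)                   # local linearization: ReLU(z) ~= z * 1(z > 0)
        H = np.maximum(Z, 0.0)                      # hidden features

        # Step 1: with the hidden layer fixed, the output weights solve a ridge regression.
        a = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)

        # Step 2: with the activation pattern D frozen, the model is linear in each
        # neuron's (w_k, b_k); update them one neuron at a time by least squares.
        for k in range(n_hidden):
            r_k = y - H @ a + a[k] * H[:, k]        # partial residual excluding neuron k
            A = (a[k] * D[:, k])[:, None] * Xb      # local-linear design for neuron k
            coef = np.linalg.solve(A.T @ A + ridge * np.eye(p + 1), A.T @ r_k)
            W[:, k], b[k] = coef[:p], coef[p]
            H[:, k] = np.maximum(X @ W[:, k] + b[k], 0.0)  # refresh feature k
    return W, b, a
```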
| Field | Value |
|---|---|
| Original language | English (US) |
| Article number | 494 |
| Journal | Mathematics |
| Volume | 10 |
| Issue number | 3 |
| DOIs | |
| State | Published - Feb 1 2022 |
All Science Journal Classification (ASJC) codes
- Computer Science (miscellaneous)
- General Mathematics
- Engineering (miscellaneous)