Homotopy Relaxation Training Algorithms for Infinite-Width Two-Layer ReLU Neural Networks

Yahong Yang, Qipin Chen, Wenrui Hao

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we present a novel training approach, the Homotopy Relaxation Training Algorithm (HRTA), aimed at accelerating training relative to traditional methods. The algorithm combines two key mechanisms: constructing a homotopy activation function that smoothly connects the linear activation function with the ReLU activation function, and relaxing the homotopy parameter to refine the later stages of training. We analyze this method within the framework of the neural tangent kernel (NTK) and establish significantly improved convergence rates. Our experimental results, particularly for networks with larger widths, support the theoretical conclusions. The proposed HRTA also shows potential for extension to other activation functions and to deep neural networks.
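As an illustration of the first mechanism, the sketch below shows one plausible form of a homotopy activation that interpolates between the identity map and ReLU. The convex-combination form and the parameter name s are assumptions for illustration; the paper defines the exact homotopy activation and the relaxation schedule for the homotopy parameter.

```python
import numpy as np


def homotopy_relu(x, s):
    """Blend the linear (identity) activation with ReLU.

    NOTE: the form (1 - s) * x + s * relu(x) is an assumed sketch of a
    homotopy activation, not necessarily the paper's exact definition.
    s = 0 gives the purely linear activation, s = 1 recovers ReLU.
    """
    return (1.0 - s) * x + s * np.maximum(x, 0.0)


# As the homotopy parameter s is relaxed from 0 toward 1, the activation
# moves from the linear regime to the ReLU regime.
x = np.linspace(-2.0, 2.0, 5)
for s in (0.0, 0.5, 1.0):
    print(f"s = {s}: {homotopy_relu(x, s)}")
```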

Original language: English (US)
Article number: 40
Journal: Journal of Scientific Computing
Volume: 102
Issue number: 2
DOIs
State: Published - Feb 2025

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Numerical Analysis
  • General Engineering
  • Computational Mathematics
  • Computational Theory and Mathematics
  • Applied Mathematics
