Stable fixed points of combinatorial threshold-linear networks

Carina Curto, Jesse Geneson, Katherine Morrison

Research output: Contribution to journal › Article › peer-review

Abstract

Combinatorial threshold-linear networks (CTLNs) are a special class of recurrent neural networks whose dynamics are tightly controlled by an underlying directed graph. Recurrent networks have long been used as models for associative memory and pattern completion, with stable fixed points playing the role of stored memory patterns in the network. In prior work, we showed that target-free cliques of the graph correspond to stable fixed points of the dynamics, and we conjectured that these are the only stable fixed points possible [19,8]. In this paper, we prove that the conjecture holds in a variety of special cases, including for networks with very strong inhibition and graphs of size n≤4. We also provide further evidence for the conjecture by showing that sparse graphs and graphs that are nearly cliques can never support stable fixed points. Finally, we translate some results from extremal combinatorics to obtain an upper bound on the number of stable fixed points of CTLNs in cases where the conjecture holds.
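The correspondence described above can be illustrated with a small simulation. The dynamics, the rule for building the connectivity matrix from the graph, and the parameter values below follow the standard CTLN convention from the authors' prior work (dx/dt = -x + [Wx + θ]₊ with W_ij = -1+ε if j→i and -1-δ otherwise); none of these specifics appear in this abstract, so treat the sketch as an assumed illustration rather than the paper's own construction.

```python
import numpy as np

# Hedged sketch (assumed standard CTLN convention, not taken from this paper):
# dynamics dx/dt = -x + [Wx + theta]_+, with W built from a directed graph G:
# W_ii = 0, W_ij = -1 + eps if j -> i in G, and W_ij = -1 - delta otherwise.
eps, delta, theta = 0.25, 0.5, 1.0

# Graph: a 3-clique (all nodes mutually connected). A full clique is
# target-free, so it should support a stable fixed point of the dynamics.
n = 3
A = np.ones((n, n)) - np.eye(n)        # adjacency: edge j -> i for all i != j

W = np.where(A > 0, -1 + eps, -1 - delta)
np.fill_diagonal(W, 0.0)

# Forward-Euler integration of the threshold-linear dynamics.
x = np.array([0.1, 0.2, 0.3])
dt = 0.01
for _ in range(5000):
    x = x + dt * (-x + np.maximum(0.0, W @ x + theta))

x_final = x
# At the symmetric fixed point on the full 3-clique, each coordinate solves
# x* = 2*(-1 + eps)*x* + theta, i.e. x* = theta / (3 - 2*eps) = 0.4.
print(x_final)   # each entry close to 0.4
```

Starting from a generic initial condition, the trajectory settles onto the fixed point supported on the clique, consistent with the clique/stable-fixed-point correspondence the abstract describes.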

Original language: English (US)
Article number: 102652
Journal: Advances in Applied Mathematics
Volume: 154
DOIs
State: Published - Mar 2024

All Science Journal Classification (ASJC) codes

  • Applied Mathematics
