TY - JOUR

T1 - Pattern completion in symmetric threshold-linear networks

AU - Curto, Carina

AU - Morrison, Katherine

N1 - Funding Information:
C.C. was supported by NSF DMS-1225666/DMS-1537228, NSF DMS-1516881, and an Alfred P. Sloan Research Fellowship.
Publisher Copyright:
© 2016 Massachusetts Institute of Technology.

PY - 2016/12/1

Y1 - 2016/12/1

N2 - Threshold-linear networks are a common class of firing rate models that describe recurrent interactions among neurons. Unlike their linear counterparts, these networks generically possess multiple stable fixed points (steady states), making them viable candidates for memory encoding and retrieval. In this work, we characterize stable fixed points of general threshold-linear networks with constant external drive and discover constraints on the coexistence of fixed points involving different subsets of active neurons. In the case of symmetric networks, we prove the following antichain property: if a set of neurons τ is the support of a stable fixed point, then no proper subset or superset of τ can support a stable fixed point. Symmetric threshold-linear networks thus appear to be well suited for pattern completion, since the dynamics are guaranteed not to get stuck in a subset or superset of a stored pattern. We also show that for any graph G, we can construct a network whose stable fixed points correspond precisely to the maximal cliques of G. As an application, we design network decoders for place field codes and demonstrate their efficacy for error correction and pattern completion. The proofs of our main results build on the theory of permitted sets in threshold-linear networks, including recently developed connections to classical distance geometry.

AB - Threshold-linear networks are a common class of firing rate models that describe recurrent interactions among neurons. Unlike their linear counterparts, these networks generically possess multiple stable fixed points (steady states), making them viable candidates for memory encoding and retrieval. In this work, we characterize stable fixed points of general threshold-linear networks with constant external drive and discover constraints on the coexistence of fixed points involving different subsets of active neurons. In the case of symmetric networks, we prove the following antichain property: if a set of neurons τ is the support of a stable fixed point, then no proper subset or superset of τ can support a stable fixed point. Symmetric threshold-linear networks thus appear to be well suited for pattern completion, since the dynamics are guaranteed not to get stuck in a subset or superset of a stored pattern. We also show that for any graph G, we can construct a network whose stable fixed points correspond precisely to the maximal cliques of G. As an application, we design network decoders for place field codes and demonstrate their efficacy for error correction and pattern completion. The proofs of our main results build on the theory of permitted sets in threshold-linear networks, including recently developed connections to classical distance geometry.

UR - http://www.scopus.com/inward/record.url?scp=84997815888&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84997815888&partnerID=8YFLogxK

U2 - 10.1162/NECO_a_00869

DO - 10.1162/NECO_a_00869

M3 - Letter

C2 - 27391688

AN - SCOPUS:84997815888

SN - 0899-7667

VL - 28

SP - 2825

EP - 2852

JO - Neural Computation

JF - Neural Computation

IS - 12

ER -