A model of working memory for latent representations

Shekoofeh Hedayati, Ryan E. O’Donnell, Brad Wyble

Research output: Contribution to journal › Article › peer-review

23 Scopus citations

Abstract

We propose a mechanistic explanation of how working memories are built and reconstructed from the latent representations of visual knowledge. The proposed model features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that links latent space activities to tokenized representations. The simulation results revealed that new pictures of familiar types of items can be encoded and retrieved efficiently from higher levels of the visual hierarchy, whereas truly novel patterns are better stored using only early layers. Moreover, a given stimulus in working memory can have multiple codes, which allows representation of visual detail in addition to categorical information. Finally, we validated our model’s assumptions by testing a series of predictions against behavioural results obtained from working memory tasks. The model provides a demonstration of how visual knowledge yields compact visual representation for efficient memory encoding.
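The binding-pool idea described in the abstract — linking latent-space activities to tokenized memory representations through a shared pool of neurons — can be illustrated with a minimal sketch. This is not the authors' implementation; the pool size, latent dimensionality, and random ±1 weights below are illustrative assumptions, and the variational autoencoder itself is omitted (any latent vector stands in for its output).

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8    # size of one latent representation (hypothetical)
POOL_SIZE = 1000  # number of shared binding-pool neurons (hypothetical)
N_TOKENS = 4      # number of memory tokens

# Fixed random connections between each (token, latent unit) pair and
# the shared pool: +1 or -1 with equal probability.
W = rng.choice([-1.0, 1.0], size=(N_TOKENS, LATENT_DIM, POOL_SIZE))

def encode(token, latent, pool):
    """Superimpose a latent vector onto the pool via token-specific weights."""
    return pool + latent @ W[token]

def retrieve(token, pool):
    """Read a latent vector back out through the same token's weights."""
    return (W[token] @ pool) / POOL_SIZE

# Store two latent vectors under different tokens in one shared pool.
pool = np.zeros(POOL_SIZE)
z0 = rng.normal(size=LATENT_DIM)
z1 = rng.normal(size=LATENT_DIM)
pool = encode(0, z0, pool)
pool = encode(1, z1, pool)

# Retrieval is approximate: items stored under other tokens add noise,
# so the reconstruction correlates strongly but not perfectly with z0.
z0_hat = retrieve(0, pool)
print(np.corrcoef(z0, z0_hat)[0, 1])
```

Because all items share one pool, this scheme naturally reproduces a capacity trade-off: the more tokens stored, the noisier each retrieval, which is one behavioural property the paper's model exploits.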

Original language: English (US)
Pages (from-to): 709-719
Number of pages: 11
Journal: Nature Human Behaviour
Volume: 6
Issue number: 5
DOIs
State: Published - May 2022

All Science Journal Classification (ASJC) codes

  • Social Psychology
  • Experimental and Cognitive Psychology
  • Behavioral Neuroscience
