Learning Across Senses: Cross-Modal Effects in Multisensory Statistical Learning

Aaron D. Mitchel, Daniel J. Weiss

Research output: Contribution to journal › Article › peer-review
Abstract

It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.

Original language: English (US)
Pages (from-to): 1081-1091
Number of pages: 11
Journal: Journal of Experimental Psychology: Learning, Memory, and Cognition
Volume: 37
Issue number: 5
State: Published - Sep 2011

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Experimental and Cognitive Psychology
  • Linguistics and Language
