Abstract
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1081-1091 |
| Number of pages | 11 |
| Journal | Journal of Experimental Psychology: Learning, Memory, and Cognition |
| Volume | 37 |
| Issue number | 5 |
| DOIs | |
| State | Published - Sep 2011 |
All Science Journal Classification (ASJC) codes
- Language and Linguistics
- Experimental and Cognitive Psychology
- Linguistics and Language