Visual speech segmentation: Using facial cues to locate word boundaries in continuous speech

Aaron D. Mitchel, Daniel J. Weiss

Research output: Contribution to journal › Article › peer-review

29 Scopus citations


Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative with respect to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.

Original language: English (US)
Pages (from-to): 771-780
Number of pages: 10
Journal: Language, Cognition and Neuroscience
Issue number: 7
State: Published - May 3 2013

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Experimental and Cognitive Psychology
  • Linguistics and Language
  • Cognitive Neuroscience
