Abstract
As humans, we communicate important information through fine nuances in our facial expressions, but because conscious motor representations are noisy, we might not be able to report these fine movements. Here we measured the precision of the explicit metacognitive information that young adults have about their own facial expressions. Participants imitated pictures of themselves making facial expressions and triggered a camera to take a picture of them while doing so. They then rated how well they thought they had imitated each expression. We defined metacognitive access to facial expressions as the relationship between objective performance (how well the two pictures matched) and subjective performance ratings. As a group, participants' metacognitive confidence ratings were only about four times less precise than their own similarity ratings. In turn, machine learning analyses revealed that participants' performance ratings were based on idiosyncratic subsets of features. We conclude that metacognitive access to one's own facial expressions is only partial.
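The abstract defines metacognitive access operationally as the relationship between an objective image-match score and a subjective self-rating. The sketch below illustrates that logic only; it is not the authors' pipeline. The pixel-correlation similarity measure, the simulated data, and all variable names (`image_similarity`, `self_ratings`, etc.) are hypothetical stand-ins for the feature-based matching and rating scales used in the study.

```python
# Minimal sketch (assumed, not the published analysis): quantify metacognitive
# access as the rank correlation between objective imitation accuracy and
# subjective self-ratings.
import numpy as np
from scipy.stats import spearmanr

def image_similarity(target: np.ndarray, imitation: np.ndarray) -> float:
    """Objective performance: pixel correlation between two equal-shape
    grayscale face images (a hypothetical stand-in for the study's measure)."""
    return float(np.corrcoef(target.ravel(), imitation.ravel())[0, 1])

rng = np.random.default_rng(0)
# Hypothetical data: 40 trials of 64x64 target/imitation image pairs
targets = rng.random((40, 64, 64))
imitations = targets + 0.3 * rng.random((40, 64, 64))    # noisy imitations
self_ratings = rng.integers(1, 8, size=40)                # 1-7 subjective ratings

objective = np.array([image_similarity(t, i) for t, i in zip(targets, imitations)])
rho, p = spearmanr(objective, self_ratings)
print(f"Metacognitive access (Spearman rho): {rho:.2f} (p = {p:.3f})")
```

A strong positive correlation would indicate good metacognitive access (higher self-ratings on trials with better objective matches); a weak correlation, as the study concludes, would indicate only partial access.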
| Original language | English (US) |
| --- | --- |
| Article number | 105155 |
| Journal | Cognition |
| Volume | 225 |
| DOIs | |
| State | Published - Aug 2022 |
All Science Journal Classification (ASJC) codes
- Experimental and Cognitive Psychology
- Language and Linguistics
- Developmental and Educational Psychology
- Linguistics and Language
- Cognitive Neuroscience