TY - JOUR
T1 - Perceptual learning of view-independence in visuo-haptic object representations
AU - Lacey, Simon
AU - Pappas, Marisa
AU - Kreps, Alexandra
AU - Lee, Kevin
AU - Sathian, K.
N1 - Funding Information:
Acknowledgments: Support to KS from the National Eye Institute, National Science Foundation, and the Veterans Administration is gratefully acknowledged.
PY - 2009/9
Y1 - 2009/9
AB - We previously showed that cross-modal recognition of unfamiliar objects is view-independent, in contrast to view-dependence within-modally, in both vision and haptics. Does the view-independent, bisensory representation underlying cross-modal recognition arise from integration of unisensory, view-dependent representations or intermediate, unisensory but view-independent representations? Two psychophysical experiments sought to distinguish between these alternative models. In both experiments, participants began from baseline, within-modal, view-dependence for object recognition in both vision and haptics. The first experiment induced within-modal view-independence by perceptual learning, which was completely and symmetrically transferred cross-modally: visual view-independence acquired through visual learning also resulted in haptic view-independence and vice versa. In the second experiment, both visual and haptic view-dependence were transformed to view-independence by either haptic-visual or visual-haptic cross-modal learning. We conclude that cross-modal view-independence fits with a model in which unisensory view-dependent representations are directly integrated into a bisensory, view-independent representation, rather than via intermediate, unisensory, view-independent representations.
UR - http://www.scopus.com/inward/record.url?scp=69549091709&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=69549091709&partnerID=8YFLogxK
U2 - 10.1007/s00221-009-1856-8
DO - 10.1007/s00221-009-1856-8
M3 - Article
C2 - 19484467
AN - SCOPUS:69549091709
SN - 0014-4819
VL - 198
SP - 329
EP - 337
JO - Experimental Brain Research
JF - Experimental Brain Research
IS - 2-3
ER -