TY - JOUR
T1 - How Language Processing can Shape a Common Model of Cognition
AU - Kelly, Matthew A.
AU - Reitter, David
N1 - Funding Information:
The authors gratefully acknowledge funding from NSF grants SES-1528409 and BCS-1734304.
Publisher Copyright:
© 2018 The Authors. Published by Elsevier B.V.
PY - 2018
Y1 - 2018
N2 - What role does the study of natural language play in the task of developing a unified theory and common model of cognition? Language is perhaps the most complex behaviour that humans exhibit, and, as such, is one of the most difficult problems for understanding human cognition. Linguistic theory can both inform and be informed by unified models of cognition. We discuss (1) how computational models of human cognition can provide insight into how humans produce and comprehend language and (2) how the problem of modelling language processing raises questions and creates challenges for widely used computational models of cognition. Evidence from the literature suggests that behavioural phenomena, such as recency and priming effects, and cognitive constraints, such as working memory limits, affect how humans produce language in ways that can be predicted by computational cognitive models. But just as computational models can provide new insights into language, language can serve as a test for these models. For example, simulating language learning requires the use of more powerful machine learning techniques, such as deep learning and vector symbolic architectures, and language comprehension requires a capacity for on-the-fly situational model construction. In sum, language plays an important role in shaping the development of a common model of the mind, and, in turn, the theoretical understanding of language stands to benefit greatly from the development of a common model.
AB - What role does the study of natural language play in the task of developing a unified theory and common model of cognition? Language is perhaps the most complex behaviour that humans exhibit, and, as such, is one of the most difficult problems for understanding human cognition. Linguistic theory can both inform and be informed by unified models of cognition. We discuss (1) how computational models of human cognition can provide insight into how humans produce and comprehend language and (2) how the problem of modelling language processing raises questions and creates challenges for widely used computational models of cognition. Evidence from the literature suggests that behavioural phenomena, such as recency and priming effects, and cognitive constraints, such as working memory limits, affect how humans produce language in ways that can be predicted by computational cognitive models. But just as computational models can provide new insights into language, language can serve as a test for these models. For example, simulating language learning requires the use of more powerful machine learning techniques, such as deep learning and vector symbolic architectures, and language comprehension requires a capacity for on-the-fly situational model construction. In sum, language plays an important role in shaping the development of a common model of the mind, and, in turn, the theoretical understanding of language stands to benefit greatly from the development of a common model.
UR - http://www.scopus.com/inward/record.url?scp=85059460790&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85059460790&partnerID=8YFLogxK
U2 - 10.1016/j.procs.2018.11.047
DO - 10.1016/j.procs.2018.11.047
M3 - Conference article
AN - SCOPUS:85059460790
SN - 1877-0509
VL - 145
SP - 724
EP - 729
JO - Procedia Computer Science
JF - Procedia Computer Science
T2 - 9th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2018
Y2 - 22 August 2018 through 24 August 2018
ER -