Abstract
We propose a cognitive model that interacts with interfaces. A central objective of cognitive science is to understand the nature of the human mind and to develop models that predict and explain human behavior. Such models are useful in Human-Computer Interaction (HCI) for predicting task performance and completion times, assisting users, finding error patterns, and acting as surrogate users. In the future, these models will be able to watch users and correct discrepancies between model and user, yielding better predictions of human performance for interactive design and providing a basis for AI interface agents. To be fully integrated into HCI design, these models need to interact with interfaces. The two main requirements for a cognitive model to interact with an interface are (a) the ability to access the information on the screen, and (b) the ability to pass commands to it. To hook models to interfaces in a general way, we work within a cognitive architecture. Cognitive architectures are computational frameworks for executing theories of cognition; they are essentially programming languages designed for modeling. Prominent examples of such architectures are Soar [1] and ACT-R [2]. ACT-R models can access the world by interacting directly with the Emacs text editor [3]. We present an initial model of eyes and hands within the ACT-R cognitive architecture that can interact with Emacs.
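The abstract names requirements (a) and (b) but does not spell out the hookup mechanism. As a purely illustrative sketch (not the authors' implementation, which runs inside ACT-R), the following Python fragment shows what reading the screen and passing commands to Emacs could look like. It assumes a running Emacs server, reachable through the standard `emacsclient --eval` interface; the `emacs_eval` helper is a hypothetical name introduced here for illustration.

```python
import subprocess


def emacs_eval(expr: str) -> str:
    """Evaluate an Emacs Lisp expression in a running Emacs server
    via emacsclient and return its printed result. (Hypothetical helper.)"""
    result = subprocess.run(
        ["emacsclient", "--eval", expr],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


# Requirement (a): access the information on the screen.
# Read the text currently visible in Emacs' selected window.
visible_text = emacs_eval(
    "(with-current-buffer (window-buffer)"
    " (buffer-substring-no-properties (window-start) (window-end)))"
)

# Requirement (b): pass commands to the interface.
# Move point to the end of the buffer and "type" text, as a hand model might.
emacs_eval("(with-current-buffer (window-buffer) (goto-char (point-max)))")
emacs_eval('(with-current-buffer (window-buffer) (insert "typed by model"))')
```

In the paper's actual setup, the eye and hand modules live within the ACT-R architecture and exchange this information with Emacs directly; the sketch above only makes the two requirements concrete.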
| Original language | English (US) |
| --- | --- |
| Pages | 15-20 |
| Number of pages | 6 |
| State | Published - 2017 |
| Event | 28th Modern Artificial Intelligence and Cognitive Science Conference, MAICS 2017 - Fort Wayne, United States. Duration: Apr 28 2017 → Apr 29 2017 |
Other

| Other | 28th Modern Artificial Intelligence and Cognitive Science Conference, MAICS 2017 |
| --- | --- |
| Country/Territory | United States |
| City | Fort Wayne |
| Period | 4/28/17 → 4/29/17 |
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence