Abstract
We present JSegMan, an approach to extending interaction in the ACT-R cognitive architecture. JSegMan allows cognitive models to interact directly with any application on a PC, and it also simulates human visual attention movements. This work allows the behavior of models to be compared more directly to human behavior by using the same uninstrumented interface. It also provides direct support for automated user interface testing and closed-loop system control. Furthermore, a new data structure, visual patterns, has been introduced to provide a more realistic representation of the world. We tested JSegMan by using it with an existing ACT-R model of a spreadsheet task, the Dismal model. Because the model interacted directly with a spreadsheet, we found defects in the Dismal model and resolved them. The revised model more accurately predicted a 20-min task while performing the task.
Original language | English (US)
---|---
State | Published - Jan 1 2018
Event | 2018 International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction and Behavior Representation in Modeling and Simulation, BRiMS 2018 - Washington, United States. Duration: Jul 10 2018 → Jul 13 2018
Conference
Conference | 2018 International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction and Behavior Representation in Modeling and Simulation, BRiMS 2018
---|---
Country/Territory | United States
City | Washington
Period | 7/10/18 → 7/13/18
All Science Journal Classification (ASJC) codes
- Human-Computer Interaction
- Modeling and Simulation