TY - GEN
T1 - Modeling visual search in interactive graphic interfaces
T2 - 16th International Conference on Cognitive Modeling, ICCM 2018
AU - Tehranchi, Farnaz
AU - Ritter, Frank E.
N1 - Funding Information:
This work was funded partially by ONR (N00014-15-1-2275). David Reitter provided useful comments on Emacs and Aquamacs (the Emacs version for Mac). We wish to thank Jong Kim who provided the idea for ESegMan and Dan Bothell for his assistance with ACT-R.
PY - 2018
Y1 - 2018
AB - We provide an update on JSegMan, an interactive system that extends the ACT-R cognitive architecture to interact with dynamic interfaces based on screen contents and to generate input for the operating system directly. Current ACT-R models typically interact with the world through ACT-R's device interface (an abstract representation of the world based on a simulated Lisp environment provided with ACT-R) or by instrumenting interfaces. In JSegMan, computer vision pattern matching algorithms and visual patterns extend the ACT-R cognitive architecture. With JSegMan, models directly move the cursor on the screen, click on application GUI objects on PCs, and type through existing Java libraries. Implementing users' visual search strategies and input abilities for different visual objects enables detailed modeling of interactive tasks on any interface. The visual pattern matching algorithms serve two goals: to simulate user behavior in interactive tasks and to create representations of visual stimuli. We tested our visual pattern matching approach by using it with an existing model of a long spreadsheet task. We found that the revised model more accurately predicted a 20-min task by performing the task entirely on an uninstrumented and unmodified interface.
UR - http://www.scopus.com/inward/record.url?scp=85058192574&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85058192574&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85058192574
T3 - Proceedings of ICCM 2018 - 16th International Conference on Cognitive Modeling
SP - 162
EP - 167
BT - Proceedings of ICCM 2018 - 16th International Conference on Cognitive Modeling
A2 - Juvina, Ion
A2 - Houpt, Joseph
A2 - Myers, Christopher
PB - University of Wisconsin
Y2 - 21 July 2018 through 24 July 2018
ER -