A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment

Tarkeshwar Singh, Christopher M. Perry, Troy M. Herter

Research output: Contribution to journal › Article › peer-review



Background: Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g., the frontal plane). When visual stimuli are presented at variable depths (e.g., the transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometric method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and from manual digitization.

Results: Within the transverse plane, our algorithm reliably differentiates saccades from fixations when visual stimuli are static, and smooth pursuits from saccades and fixations when visual stimuli are dynamic.

Conclusions: The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
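The abstract describes computing angular ocular kinematics for gaze points in a transverse (variable-depth) plane and then classifying gaze events with velocity-based thresholds. The sketch below is only an illustrative toy, not the authors' published algorithm: it assumes a hypothetical single "cyclopean" eye at a fixed position `EYE_POS` above a horizontal workspace plane, converts planar gaze samples to visual angles subtended at that eye, and applies a commonly used (here assumed) 30 deg/s angular-velocity cutoff to separate saccades from fixations.

```python
import math

# Illustrative assumptions (not from the paper):
EYE_POS = (0.0, -0.30, 0.45)   # hypothetical eye location (m) above the workspace origin
SACCADE_THRESHOLD = 30.0       # deg/s; an illustrative velocity cutoff


def visual_angle(p, q):
    """Angle (deg) subtended at the eye by two gaze points (x, y) in the plane z = 0."""
    def unit(g):
        # Line-of-sight vector from the eye to the gaze point, normalized.
        v = (g[0] - EYE_POS[0], g[1] - EYE_POS[1], 0.0 - EYE_POS[2])
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    a, b = unit(p), unit(q)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))


def classify(samples, dt):
    """Label each inter-sample interval 'saccade' or 'fixation' by angular velocity.

    samples: list of (x, y) gaze points in metres; dt: sampling interval in seconds.
    """
    labels = []
    for p, q in zip(samples, samples[1:]):
        velocity = visual_angle(p, q) / dt   # angular velocity in deg/s
        labels.append("saccade" if velocity > SACCADE_THRESHOLD else "fixation")
    return labels
```

Working from angles subtended at the eye, rather than planar displacement, is what accounts for variable viewing depth: the same planar step corresponds to a smaller visual angle when the gaze point is farther from the eye.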

Original language: English (US)
Article number: 10
Journal: Journal of NeuroEngineering and Rehabilitation
Issue number: 1
State: Published - Jan 26, 2016

All Science Journal Classification (ASJC) codes

  • Rehabilitation
  • Health Informatics


