Eye-tracking for Interactive Computer Graphics


Carol O’Sullivan, John Dingliana, Gareth Bradshaw, Ann McNamara

Image Synthesis Group, Trinity College Dublin, Ireland

Email: Carol.OSullivan@cs.tcd.ie


Eye-tracking and eye-movements are attracting increasing interest from the computer graphics community, for several reasons. One potential use of eye-trackers, more closely related to the field of Human-Computer Interaction (HCI) and interfaces for the disabled, is as an innovative interaction device. A second possibility, probably more compelling from a computer graphics perspective, is the use of eye-tracking in gaze-contingent applications: a viewer’s fixations and saccades are recorded, and the resolution and quality of the resulting image or animation are adjusted accordingly. The final, and perhaps least investigated, approach uses eye-movements to gain insight into the way people view synthesised images and animations, with the dual purpose of optimising perceived quality and making algorithms more efficient. In effect, eye-movement analysis could be used to develop new metrics for evaluating the perceptual effectiveness of the images and animations we produce. This endeavour requires close interdisciplinary collaboration between computer scientists and psychologists.
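
As a purely illustrative sketch of the gaze-contingent idea, the C++ fragment below maps a point's angular eccentricity from the current fixation to one of three rendering quality levels. The thresholds, viewing geometry and three-level scheme are our own assumptions for the sake of example, not the parameters of any particular published system.

#include <cmath>
#include <cstdio>

struct Point { double x, y; };   // screen position in pixels

// Angular eccentricity (in degrees) of a screen point from the current
// fixation, given the viewing distance and the display's pixel density.
double eccentricityDeg(Point gaze, Point p,
                       double viewDistCm, double pixelsPerCm) {
    const double kPi = 3.14159265358979323846;
    double dx = (p.x - gaze.x) / pixelsPerCm;   // horizontal offset in cm
    double dy = (p.y - gaze.y) / pixelsPerCm;   // vertical offset in cm
    double offsetCm = std::sqrt(dx * dx + dy * dy);
    return std::atan2(offsetCm, viewDistCm) * 180.0 / kPi;
}

// Map eccentricity to a rendering level of detail: 0 = full quality at
// the fovea, larger values = coarser rendering in the periphery.
// The 5- and 15-degree thresholds are illustrative assumptions only.
int levelOfDetail(double eccDeg) {
    if (eccDeg < 5.0)  return 0;   // foveal: full resolution
    if (eccDeg < 15.0) return 1;   // parafoveal: reduced quality
    return 2;                      // peripheral: coarsest quality
}

int main() {
    Point gaze{640.0, 512.0};      // fixation reported by the eye-tracker
    Point sample{900.0, 700.0};    // point being rendered
    double ecc = eccentricityDeg(gaze, sample, 60.0, 40.0);
    std::printf("eccentricity %.1f deg -> LOD %d\n", ecc, levelOfDetail(ecc));
    return 0;
}

In a real gaze-contingent system the fixation point would be sampled from the eye-tracker every frame, and the quality mapping re-evaluated as gaze moves.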


Our research concentrates on using eye-tracking to enhance the perceived realism of our images and simulations, both by developing gaze-contingent algorithms and by analysing eye-movements. To evaluate simulations, we are currently investigating the relationship between eye-movements and dynamic events such as collisions. A previous set of experiments, which investigated the functional field of view for collision events, showed that the ability to detect anomalous behaviour in colliding objects decreases with increasing eccentricity and with increasing numbers of visually homogeneous distractors. Other effects were found, but they were not robust enough to serve as general-purpose heuristics. Based on these results, a gaze-contingent application has been developed in which collisions occurring within a circular region of the screen, centred on the viewer’s point of fixation, are handled at a higher resolution than those outside it; a sketch of such a policy is given below. The radius of this region is adjusted dynamically according to the number of visible objects. The application now allows us to modify the behaviour of objects outside the viewer’s region of interest, e.g. by letting objects merge with each other instead of bouncing apart at a distance, and to measure viewers’ sensitivity to this type of change.

Another ongoing project is to build a real scene, in which an avalanche of real rocks falls down the side of a scale model of a mountain, and to compare people’s perception of this scenario with that of a simulated one. Eye-movement analysis will also play a central role in this investigation.
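
To make the collision policy above concrete, the following sketch classifies each collision by its screen distance from the fixation point, using a high-resolution region whose radius shrinks as more objects become visible. All constants, names and the square-root scaling rule are our own illustrative assumptions, not the actual parameters of the system described above.

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };   // screen coordinates in pixels

static double dist(Vec2 a, Vec2 b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Radius of the high-resolution region around the fixation point. It
// shrinks as more objects become visible, so that the total cost of
// fine-grained collision handling stays roughly bounded. The constants
// and the square-root falloff are placeholder assumptions.
double regionRadius(int visibleObjects) {
    const double maxRadius = 400.0;   // pixels, for sparse scenes
    const double minRadius = 80.0;    // pixels, for crowded scenes
    int n = std::max(1, visibleObjects);
    double r = maxRadius / std::sqrt(static_cast<double>(n));
    return std::max(minRadius, std::min(maxRadius, r));
}

enum class CollisionHandling { HighResolution, LowResolution };

// Collisions near the fixation point receive fine-grained contact
// resolution; those outside the region may be resolved coarsely, or
// the objects may even be allowed to merge, since viewers are less
// likely to notice anomalies in the periphery.
CollisionHandling classify(Vec2 fixation, Vec2 collisionPoint,
                           int visibleObjects) {
    return dist(fixation, collisionPoint) <= regionRadius(visibleObjects)
               ? CollisionHandling::HighResolution
               : CollisionHandling::LowResolution;
}

int main() {
    Vec2 gaze{640.0, 512.0};   // current fixation from the eye-tracker
    Vec2 hit{700.0, 540.0};    // screen position of a detected collision
    CollisionHandling h = classify(gaze, hit, 25);
    std::printf("collision handled at %s resolution\n",
                h == CollisionHandling::HighResolution ? "high" : "low");
    return 0;
}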