
Visualizing Residual Vision

Eye-Tracking and Computer Vision in Urban Navigation

Categories: Health & Wellness, Urban


Project Sponsor:

John-Ross Rizzo, Associate Professor at NYU Tandon and NYU Langone

Mentor:

Junchi Feng, PhD Candidate at NYU Tandon


Authors

Zipei Zhao, Yan Li, Tianshu Shi


Research Question

How do individuals with different types of vision loss use their residual vision during real-world navigation, and how do their gaze behaviors differ from those of fully sighted individuals? Are key objects being missed or overemphasized, and can these findings inform personalized assistive technologies?


Background

This project investigates how individuals with visual impairments use their residual vision while navigating urban environments. Using eye-tracking data collected from both visually impaired and fully sighted participants walking the same city route, students analyze gaze behavior through object-level semantic segmentation. The project identifies key differences in visual attention, assesses whether critical environmental cues are missed by the visually impaired, and explores adaptive gaze strategies. The findings support the development of more personalized assistive technologies and urban design recommendations.


Methodology

Participants with visual impairment wore eye-tracking glasses while walking a 0.5-mile urban route. All participants traversed the same environment, encountering a variety of static and dynamic obstacles typical of NYC streets, including crosswalks, scaffolding, tree guards, poles, and pedestrians. Data collection is complete.

A previous study from the Rizzo Lab revealed that fully sighted individuals exhibit a consistent “T-shaped” fixation pattern, whereas visually impaired individuals show more variable and individualized gaze behaviors. This variability in fixation patterns suggests a need for deeper analysis of which objects are attended to or overlooked during navigation.
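To make the “T-shaped” pattern concrete, the sketch below aggregates fixation directions into a 2D density map, the kind of view in which such a pattern would appear. It is a minimal illustration assuming fixations are expressed as azimuth/elevation angles in degrees (as in the fields listed under Data Sources); the function name, binning choices, and synthetic data are all illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def fixation_heatmap(azimuth_deg, elevation_deg, bins=60, extent=60):
    """Bin fixation directions into a 2D histogram (density map).

    azimuth_deg / elevation_deg: 1D arrays of fixation angles in degrees,
    relative to the scene camera's optical axis.
    """
    hist, xedges, yedges = np.histogram2d(
        azimuth_deg, elevation_deg,
        bins=bins, range=[[-extent, extent], [-extent, extent]],
    )
    return hist, xedges, yedges

# Synthetic data standing in for one participant's fixations.
rng = np.random.default_rng(0)
az = rng.normal(0, 12, 5000)   # horizontal spread
el = rng.normal(-5, 8, 5000)   # slight downward bias (toward the path)
hist, xe, ye = fixation_heatmap(az, el)

# histogram2d puts azimuth on axis 0, so transpose for display.
plt.imshow(hist.T, origin="lower", extent=[xe[0], xe[-1], ye[0], ye[-1]])
plt.xlabel("Azimuth (deg)")
plt.ylabel("Elevation (deg)")
plt.title("Fixation density (synthetic example)")
plt.show()
```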

In this capstone project, students:

  • Clean and preprocess fixation and video data
  • Apply semantic segmentation to annotate objects in each scene
  • Match fixations to segmented objects
  • Compare the object-based fixation distribution across participant groups (both steps are sketched in the example after this list)
  • Visualize and statistically analyze how different vision conditions affect gaze allocation during real-world mobility
  • Analyze the intersection between trajectory data and residual gaze to understand how navigation paths correlate with gaze strategies
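The sketch below illustrates the matching and comparison steps flagged in the list. It is a minimal example assuming per-frame segmentation masks stored as 2D arrays of integer class IDs and fixations given as pixel coordinates in the scene-camera frame; the class names, synthetic data, and chi-square test are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
from collections import Counter
from scipy.stats import chi2_contingency

# Illustrative class IDs; the real label set comes from the segmentation model.
CLASS_NAMES = {0: "background", 1: "crosswalk", 2: "scaffolding",
               3: "pole", 4: "pedestrian", 5: "tree_guard"}

def match_fixations_to_objects(fixations, masks):
    """Look up the segmented object class under each fixation.

    fixations: iterable of (frame_idx, x_px, y_px) in scene-camera pixels.
    masks: dict mapping frame_idx -> 2D array of per-pixel class IDs.
    Returns a Counter mapping class name -> fixation count.
    """
    counts = Counter()
    for frame_idx, x, y in fixations:
        mask = masks.get(frame_idx)
        if mask is None:
            continue  # frame not segmented; skip this fixation
        h, w = mask.shape
        if 0 <= int(y) < h and 0 <= int(x) < w:
            counts[CLASS_NAMES.get(int(mask[int(y), int(x)]), "other")] += 1
    return counts

# Synthetic example: random masks and a few fixations per group.
rng = np.random.default_rng(1)
masks = {0: rng.integers(0, 6, size=(720, 1280)),
         1: rng.integers(0, 6, size=(720, 1280))}
sighted = [(0, 640, 360), (0, 100, 500), (1, 900, 200)]
impaired = [(0, 640, 700), (1, 50, 50), (1, 1200, 710)]

c_sighted = match_fixations_to_objects(sighted, masks)
c_impaired = match_fixations_to_objects(impaired, masks)

# Chi-square test: do the groups allocate fixations across object
# classes in different proportions?
classes = sorted(set(c_sighted) | set(c_impaired))
table = np.array([[c_sighted.get(c, 0) for c in classes],
                  [c_impaired.get(c, 0) for c in classes]])
chi2, p, dof, _ = chi2_contingency(table + 1)  # +1 smoothing avoids zero cells
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```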

This work helps uncover how residual vision is deployed during complex mobility tasks and generates insights for future assistive technology and rehabilitation training.


Deliverables
  • Statistical report comparing object-based fixation behaviors between participant groups
  • Design or policy recommendations for assistive technology for urban environments
  • Integrated analysis report linking gaze behaviors to navigation trajectories, highlighting implications for personalized assistive technologies and urban planning

Data Sources

The Rizzo Lab is providing:

  • Existing eye-tracking data: Fixation coordinates, elevation/azimuth, timestamps, and head pitch
  • Scene camera footage: First-person video from the eye tracker
  • Gaze-enriched metadata: Preprocessed data from Pupil Cloud, including timestamped fixation events
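A minimal loading-and-cleaning sketch for such an export is shown below. The file path, column names, and duration threshold are assumptions to be checked against the actual Pupil Cloud export, since field names vary across export versions.

```python
import pandas as pd

# Hypothetical path and column names: verify against the actual
# Pupil Cloud export, since field names differ between export versions.
fix = pd.read_csv("export/fixations.csv")

# Basic cleaning: drop implausibly short fixations. The 60 ms threshold
# is a common rule of thumb, not a value from the project description.
MIN_DURATION_MS = 60
fix = fix[fix["duration [ms]"] >= MIN_DURATION_MS]

print(f"{len(fix)} fixations retained after cleaning")
print(fix["duration [ms]"].describe())
```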