To assess the potential for visual inspection and eye-hand coordination without tactile feedback under conditions that may be available to future retinal prosthesis wearers, we studied the ability of sighted individuals to act upon pixelized visual information at very low resolution, equivalent to 20/2400 visual acuity. Live images from a head-mounted camera were low-pass filtered and presented in a raster of 6×10 circular Gaussian dots. Subjects could either freely move their gaze across the raster (free-viewing condition), or the raster position was locked to the subject's gaze by means of video-based pupil tracking (gaze-locked condition). Four normally sighted subjects and one severely visually impaired subject with moderate nystagmus participated in a series of four experiments. The subjects' task was to count the 1 to 16 white fields randomly distributed across an otherwise black checkerboard (counting task) or to place a black checker on each of the white fields (placing task). All subjects were capable of learning both tasks after varying amounts of practice, in both the free-viewing and the gaze-locked conditions. The normally sighted subjects all reached very similar performance levels independent of the condition. The practiced performance of the visually impaired subject in the free-viewing condition was indistinguishable from that of the normally sighted subjects, but this subject required approximately twice as much time to place checkers in the gaze-locked condition, a difference most likely attributable to the nystagmus. Thus, if early retinal prosthesis wearers can achieve crude form vision, then on the basis of these results they too should be able to perform simple eye-hand coordination tasks without tactile feedback.
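The pixelization described above (low-pass filtering a camera image, then rendering it as a 6×10 raster of circular Gaussian dots) can be sketched as follows. This is a minimal illustrative implementation, not the authors' actual stimulus pipeline; the block-averaging low-pass step, the dot size, and the Gaussian width are assumptions chosen for clarity.

```python
import numpy as np

def pixelize_gaussian_raster(image, rows=6, cols=10, dot_px=20, sigma_frac=0.3):
    """Reduce a grayscale image (values in [0, 1]) to a rows x cols raster
    of circular Gaussian dots.

    Block averaging serves as the low-pass filter; each averaged sample
    modulates the brightness of one Gaussian dot. Parameter values
    (dot_px, sigma_frac) are illustrative, not taken from the study.
    """
    h, w = image.shape
    # Low-pass filter: average the image over a rows x cols grid of blocks.
    cropped = image[:h - h % rows, :w - w % cols]
    samples = cropped.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))

    # Precompute one circular Gaussian dot profile.
    ys, xs = np.mgrid[0:dot_px, 0:dot_px]
    c = (dot_px - 1) / 2.0
    dot = np.exp(-((xs - c) ** 2 + (ys - c) ** 2) / (2 * (sigma_frac * dot_px) ** 2))

    # Tile the raster: each sample scales one dot on a black background.
    out = np.zeros((rows * dot_px, cols * dot_px))
    for r in range(rows):
        for col in range(cols):
            out[r * dot_px:(r + 1) * dot_px,
                col * dot_px:(col + 1) * dot_px] = samples[r, col] * dot
    return out
```

A white input image would yield 60 bright dots on a black field; darker image regions dim the corresponding dots, giving the coarse form vision the experiments simulate.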