Paragraph text reading using a pixelized prosthetic vision simulator: Parameter dependence and task learning in free-viewing conditions

Gislin Dagnelie, David Barnett, Mark S. Humayun, Robert W. Thompson

Research output: Contribution to journal › Article › peer-review

Abstract

PURPOSE. To investigate the feasibility of adequate reading by recipients of future prosthetic visual implants through simulation in sighted observers.

METHODS. Four normally sighted subjects used a video headset to view short-story segments at a sixth-grade reading level, presented in 6- to 11-word paragraphs through a pixelizing grid defined by five parameters (dot size, grid size, dot spacing, random dropout percentage, and gray-scale resolution). Grid parameters were varied individually, and four character sizes and two contrast levels were used.

RESULTS. Reading speeds of 30 to 60 words per minute without errors were recorded for some parameter combinations. In general, reading accuracy and speed were influenced by all parameters. Reading accuracy exceeded 90% if the following conditions were met: at least 3 dots per character width were presented, and dropout did not exceed 50%. Reading speed deteriorated below 20 words per minute if accuracy fell below 90%, and at low contrast if the grid spanned less than two characters.

CONCLUSIONS. It is uncertain whether and to what extent retinal reorganization may limit the perception of multiple phosphenes by blind prosthesis recipients. If distinct phosphenes can be perceived, these results suggest that a 3 × 3-mm² prosthesis with 16 × 16 electrodes should allow paragraph reading. The effects of stabilizing the dot grid on the retina must be investigated further.
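The pixelizing-grid simulation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function and parameter names (`pixelize`, `grid_size`, `dropout`, `gray_levels`) are hypothetical, and dot size and spacing are omitted because they affect rendering geometry rather than the sampled values. The sketch samples an image at a square grid of points, quantizes each sample to a limited number of gray levels, and randomly drops out a fraction of dots to mimic dead electrodes.

```python
import random

def pixelize(image, grid_size=16, dropout=0.3, gray_levels=4, seed=0):
    """Crude sketch of a phosphene-grid rendering (illustrative only).

    image: list of rows of pixel intensities in 0-255.
    grid_size: number of dots per side (e.g. 16 for a 16 x 16 array).
    dropout: fraction of dots randomly switched off ("dead electrodes").
    gray_levels: number of evenly spaced gray levels per dot.
    """
    h, w = len(image), len(image[0])
    rng = random.Random(seed)
    # Sample positions spread evenly across the image.
    ys = [round(i * (h - 1) / (grid_size - 1)) for i in range(grid_size)]
    xs = [round(j * (w - 1) / (grid_size - 1)) for j in range(grid_size)]
    out = []
    for y in ys:
        row = []
        for x in xs:
            v = image[y][x]
            # Quantize to gray_levels evenly spaced values in 0-255.
            q = round(v / 255 * (gray_levels - 1)) * 255 // (gray_levels - 1)
            # Random dropout: a dead electrode renders nothing.
            if rng.random() < dropout:
                q = 0
            row.append(q)
        out.append(row)
    return out
```

With the defaults above, a 16 × 16 grid with 4 gray levels and 30% dropout roughly corresponds to one of the mid-range conditions the abstract describes; varying each keyword argument individually mirrors the study's one-parameter-at-a-time design.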

Original language: English (US)
Pages (from-to): 1241-1250
Number of pages: 10
Journal: Investigative Ophthalmology and Visual Science
Volume: 47
Issue number: 3
DOIs
State: Published - Mar 2006

ASJC Scopus subject areas

  • Ophthalmology
  • Sensory Systems
  • Cellular and Molecular Neuroscience
