To examine an image remapping method for peripheral visual field (VF) expansion with novel virtual reality digital spectacles (DSpecs) to improve visual awareness in glaucoma patients.
Prospective case series.
Monocular peripheral VF defects were measured and defined with a head-mounted display diagnostic algorithm. The monocular VF was used to calculate remapping parameters with a customized algorithm that relocates and resizes unseen peripheral targets within the remaining VF. Monocular VF testing followed by customized image remapping was carried out in 23 patients with typical glaucomatous defects. Test images depicting roads and cars were used to determine increased awareness of peripheral hazards while wearing the DSpecs. Patients' scores in identifying and counting peripheral objects in the remapped images were the main outcome measures.
The diagnostic monocular VF testing algorithm was comparable to standard automated perimetric determination of threshold sensitivity based on point-by-point assessment. Eighteen of 23 patients (78%) could identify safety hazards with the DSpecs that they could not previously. The ability to identify peripheral objects improved with the use of the DSpecs (P = 0.024, chi-square test). Quantification of the number of peripheral objects improved with the DSpecs (P = 0.0026, Wilcoxon rank sum test).
These novel spectacles may enhance awareness of peripheral objects by enlarging the functional field of view in glaucoma patients.
Peripheral visual field (VF) defects are caused by many diseases, including glaucoma, retinitis pigmentosa, and stroke. Over 60 million patients suffer irreversible vision loss secondary to glaucoma. Patients with peripheral VF defects account for approximately 25% of those experiencing VF loss. Despite recent advances, no medical or surgical treatment can reverse existing damage. Patients sustain significant loss of their functional vision, leading to a disability that dramatically affects many daily life activities and higher-order visual processing skills. Patients seek low-vision rehabilitation (LVR) and depend on visual aids to maximize their visual performance and improve their functionality and safety. Nevertheless, current visual aids often fail to achieve the goals of safety and independence. Additionally, evaluation of the effectiveness of modern visual aids for patients with peripheral field defects is limited.
Current strategies to improve visual function offer patients either optical or electronic visual aids. Optical devices either minimize or relocate the remaining VF associated with peripheral defects and, in doing so, reduce the perceived resolution and frequently produce annoying image overlaps. Electronic visual aids capture the environment with an electronic camera, process the acquired signal, and display the processed images to the patient. Currently available electronic low-vision aids primarily address central vision deficits and do not consider functional activities that depend on peripheral vision. Some approaches apply electronic magnification of images and letters through a head-mounted display (HMD) or closed-circuit TV (CCTV) to help patients with age-related macular degeneration (AMD). Other devices use basic image-enhancement techniques such as contrast and brightness adjustment, zoom, edge sharpening, and color inversion, which makes text on white paper easier to read. However, clinical trials and studies of these visual aids are limited, and only 2 studies have reported their performance. Culham and associates applied central vision tests such as reading, writing, and identifying objects to determine the utility of currently available visual aids. Wittich and associates investigated a device that improved performance on central vision-related tests; however, tasks depending on peripheral vision did not significantly improve with that aid. No guidelines or criteria have been developed for determining the suitability of a particular visual aid for a specific patient's condition. This is crucially important when recommending visual aids to patients with peripheral visual defects. Thus, a need exists to develop a low-vision aid that overcomes the shortcomings reported in previous studies, particularly with respect to improving peripheral vision.
This paper introduces a novel concept in visual aids: digital spectacles (DSpecs) that measure the VF and use that measurement to apply a personalized vision augmentation profile based on the patient's unique VF defects. Augmentation of the VF is done by real-time resizing and shifting of the patient's scene into the remaining intact VF. To assess the value of the DSpecs as a proof of concept, we tested the ability of patients to recognize hazardous objects located in the peripheral VF with and without the DSpecs, using static test images.
Subjects and Methods
We used a virtual reality head-mounted display (VR HMD) (HTC Vive; Xindian District, New Taipei City, Taiwan) with an integrated eye-tracking system (Tobii Technology, Danderyd, Sweden) to transmit gaze data for visual field testing. Infrared light-emitting diodes (LEDs) positioned around the display lenses and eye-tracking sensors located beneath each display lens enabled independent tracking of the right and left eyes (Figure 1A). The field of view of both the VR HMD display and the eye-tracking system is 100° for each eye independently. The headset was tethered to a VR-enabled laptop (Core i7, 2.8 GHz quad core, NVIDIA 1070, 32 GB RAM; Intel, Santa Clara, California) that ran the VF testing program and image remapping algorithms. These algorithms were implemented in MATLAB R2018b (MathWorks, Inc., Natick, Massachusetts) and in C# under Unity (Unity Technologies, San Francisco, California).
Before visual field testing, the eye tracker was calibrated with a 1-point strategy. This calibration offsets the origin of the eye-tracking coordinate system to the central fixation point on the display screen. An 80° VF test was then performed, and unseen or partially seen test targets were displayed to the participants within areas of intact VF. The participants were asked to identify hazardous objects, such as cars, with and without image remapping. Patients with prescription glasses that fit inside the HMD device wore them during the test.
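The 1-point calibration described above amounts to a simple coordinate offset. A minimal Python sketch follows; the study's software was written in MATLAB and C#, and the function names here are hypothetical illustrations:

```python
import numpy as np

def one_point_calibration(raw_gaze_at_fixation, fixation_point=(0.0, 0.0)):
    """Offset that maps raw gaze coordinates onto the display's central
    fixation point (1-point calibration). Hypothetical helper."""
    return np.asarray(fixation_point) - np.asarray(raw_gaze_at_fixation)

def apply_calibration(raw_gaze, offset):
    """Shift a raw gaze sample into the calibrated coordinate system."""
    return np.asarray(raw_gaze) + offset

# While the patient looks at the central target, the tracker reports a
# slightly shifted gaze position; the offset cancels that shift.
offset = one_point_calibration(raw_gaze_at_fixation=(1.5, -0.8))
print(apply_calibration((1.5, -0.8), offset))  # -> [0. 0.]
```

The same offset is then applied to every subsequent gaze sample during the test.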
Prospective institutional review board approval was obtained from the University of Miami, and informed consent was obtained from all participating patients before commencing the study.
The study was designed in accordance with the Declaration of Helsinki and was compliant with Health Insurance Portability and Accountability Act regulations.
Twenty-eight patients were recruited from glaucoma clinics at the Anne Bates Leach Eye Center, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, during their regular follow-up visits. Five patients were excluded: 3 for unreliable VF testing results and 2 for inability to perform eye tracking because of pupil dilation. Twenty-three patients (33 eyes) were included in the study. These patients had previously undergone standard automated perimetry (SAP) with the Humphrey 24-2 program (Zeiss Meditec, Dublin, California) that demonstrated typical glaucomatous visual field defects. Twenty-one patients had been examined using the Swedish Interactive Threshold Algorithm (SITA)-Standard strategy; the other 2 were tested with the SITA-Fast (n = 1) and FASTPAC (n = 1) strategies.
Measuring the Visual Field
The DSpecs measurement method of the visual field was based on the Humphrey standard automated static perimetry technique. A fast thresholding strategy was applied with 4-contrast staircase stimuli. Locations within the central 40° radius were tested with 52 stimulus sequences at the positions shown in Figure 1B, with 10° spacing between locations. Each stimulus sequence consisted of 4 consecutive stimuli presented at different contrast levels with respect to the background, ranging from 32 dB to 20 dB in steps of 4 dB; a value of 0 dB was assigned if the patient did not see any stimulus contrast. The background had bright illumination (100 lux), and the stimuli were dark dots at the different contrast levels. A stimulus size of 0.43° was used, equivalent to the standard Goldmann stimulus size III. The stimulus size in the mid-periphery (between 24° and 40° radius) was doubled to 0.86° to compensate for the degraded performance of the display lens in the periphery and in consideration of the decreased acuity of normal human peripheral vision, which drops to 20/270 at 25° and to 20/555 at 40° eccentricity. The testing program was performed with either stimulus size III (0.43°) or size V (1.72°), according to the SAP testing parameters previously used.
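The 4-step staircase above can be sketched as follows. This is a simplified Python illustration, not the study's MATLAB implementation; `patient_sees` is a hypothetical callback standing in for the patient's clicker response:

```python
# Contrast levels of the 4-step staircase, from faintest (32 dB) to
# strongest (20 dB), in 4 dB steps.
CONTRAST_LEVELS_DB = [32, 28, 24, 20]

def staircase_threshold(patient_sees):
    """Threshold sensitivity (dB) at one test location.
    `patient_sees(level_db)` is a hypothetical callback returning True
    when the patient clicks for a stimulus at that contrast level."""
    for level in CONTRAST_LEVELS_DB:
        if patient_sees(level):
            return level  # faintest contrast the patient responded to
    return 0  # 0 dB assigned when no stimulus contrast was seen
```

For example, a simulated patient who sees stimuli of 24 dB or stronger yields `staircase_threshold(lambda db: db <= 24) == 24`.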
Patients were asked to fixate the eye under examination on a target located at the center of the display. Fixation was monitored with the eye-tracking system at different time intervals. The testing program proceeded only if the participant could fixate on the central target; if fixation was not maintained, the program paused until fixation was restored. Patients responded to the stimuli by pressing a wireless clicker. Additionally, gaze direction was checked after each response, whether positive (seen point) or negative (unseen point), to ensure proper fixation before recording the response; otherwise, that stimulus location was repeated later during the test. Normal blind spots were scanned by showing suprathreshold stimuli at 4 locations spaced 1° apart in the vicinity 15° temporal to fixation. This step minimized rotational misalignment between the headset and the eyes. False-positive responses, false-negative responses, and fixation losses were calculated. To generate a VF display plot, the 52 responses were interpolated to produce a gray-scale printout.
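The fixation-gated presentation logic might be sketched as below. This Python sketch simplifies the pause-until-refixation behavior into requeueing the location, and both callbacks (`present_stimulus`, `fixation_ok`) are hypothetical stand-ins for the clicker and eye tracker:

```python
from collections import deque

def run_vf_test(locations, present_stimulus, fixation_ok):
    """Present each stimulus location only under verified fixation.
    `present_stimulus(loc)` returns True if the patient clicked;
    `fixation_ok()` reports whether gaze is on the central target.
    A response that arrives without fixation is discarded and the
    location is repeated later in the test."""
    queue = deque(locations)
    responses = {}
    while queue:
        loc = queue.popleft()
        if not fixation_ok():        # simplification: requeue instead of pausing
            queue.append(loc)
            continue
        seen = present_stimulus(loc)
        if fixation_ok():            # gaze re-checked after the response
            responses[loc] = seen
        else:
            queue.append(loc)        # response discarded, repeat location later
    return responses
```

A production version would also need a timeout or pause state so the loop cannot spin while fixation is lost.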
VF tests made with the DSpecs for the 23 clinical patients were compared with the most recent Humphrey VF SAP tests. The common areas of the central 24° were matched and compared between the 2 VF-testing devices, with the comparison and relative error calculations performed on a point-by-point basis over the common central 24° area. Taking the SAP test as the reference, VF test errors were computed by finding the mismatches between the 2 testing methods. More peripheral VF areas were judged by the absence of isolated response points.
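The point-by-point comparison against SAP as the reference can be illustrated with a short sketch; the function is a hypothetical helper assuming both tests have been reduced to binary seen/unseen maps over the common central points:

```python
import numpy as np

def pointwise_error(dspecs_seen, sap_seen):
    """Relative error between DSpecs and SAP results over the common
    central points, on a point-by-point basis, with SAP as the
    reference. Inputs are binary seen/unseen maps of equal size."""
    dspecs_seen = np.asarray(dspecs_seen, dtype=bool)
    sap_seen = np.asarray(sap_seen, dtype=bool)
    mismatches = np.count_nonzero(dspecs_seen != sap_seen)
    return mismatches / sap_seen.size
```

For example, 1 mismatch among 4 common points gives a relative error of 0.25.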
Images were remapped by geometric resizing and shifting operations that fit the test image into the intact areas of the measured VF. The remapping algorithm was based on the 52-point VF test responses arranged in an 8 × 8 matrix, which was used to calculate new dimensions and a new center for the output images shown to the participants. First, the VF was binarized by setting all "seen" responses to 1 and "unseen" responses to 0, resulting in an 8 × 8 binary image. Figure 2A shows an example of such a binary image for a right-eye examination with peripheral defects in the upper hemifield. The 12 corner points marked in gray are untested points and were given a value of 0. Afterward, all small regions consisting of no more than 4 connected "unseen" pixels were removed from the binary VF image and were not considered in the image-fitting process (Figure 2B). These small unseen regions represent the normal blind spot, insignificant defects (up to 8% of the total number of testing points), or random erroneous responses that might have occurred during the patient's VF test. Our aim in eliminating them was to remap the test images as large as possible inside the largest intact VF area and to reduce the excessive image minification and translation that the remapping algorithm might otherwise produce. By trial and error, eliminating regions of up to 4 connected unseen points provided satisfactory remapping performance.
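The binarization and small-region removal steps can be sketched in Python (the study's implementation was in MATLAB; the breadth-first search below is one straightforward way to find 4-connected regions):

```python
import numpy as np
from collections import deque

def clean_binary_field(responses_db, max_defect_size=4):
    """Binarize an 8x8 VF response matrix (nonzero dB = seen) and fill
    any 4-connected 'unseen' region of at most `max_defect_size` pixels,
    e.g. the normal blind spot or an isolated erroneous response."""
    binary = (np.asarray(responses_db) > 0).astype(np.uint8)
    visited = np.zeros_like(binary, dtype=bool)
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] == 0 and not visited[r, c]:
                # Breadth-first search over one 4-connected unseen region.
                region, queue = [], deque([(r, c)])
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny, nx] == 0 and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(region) <= max_defect_size:
                    for y, x in region:
                        binary[y, x] = 1  # ignore the small defect
    return binary
```

A single unseen point (such as the blind spot) is filled in, while a 9-pixel defect survives the cleaning and is respected by the later fitting steps.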
We applied a two-dimensional cubic interpolation to the modified binary VF image to achieve the desired output image size (Figure 3A). This resized VF image was used as a template to map the test images into the largest intact region of the visual field, that is, the regions with pixel values of 1. Based on this interpolated binary field image, the intact field's region properties were calculated: 1) intact area in units of pixels squared; 2) region bounding box; 3) weighted area centroid; and 4) a list of all pixels constituting the intact regions of the VF. A bounding box is the smallest rectangle enclosing all pixels constituting the intact region; the bounding box of the example in Figure 3A is shown in Figure 3B, marked with a red box. The dimensions of the bounding box were used in the resizing phase of the remapping function. A region centroid is the center of mass of that region, calculated in terms of horizontal and vertical coordinates (Figure 3B); its value corresponds to the output image's new center, that is, the amount of image shift required for remapping. The weighted area centroid also takes into account the intensity values of the region's pixels, which gives a more accurate estimate of the output image's new center.
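The bounding-box and weighted-centroid properties can be computed with plain NumPy. This sketch assumes a single intact region in the interpolated template (the function name is hypothetical; the MATLAB implementation presumably used built-in region-property routines):

```python
import numpy as np

def intact_region_properties(field):
    """Bounding box and weighted area centroid of the intact region in
    an interpolated VF template (values in [0, 1]; 1 = fully seen)."""
    field = np.asarray(field, dtype=float)
    ys, xs = np.nonzero(field)
    # Bounding box: smallest rectangle enclosing all intact pixels.
    bbox = (ys.min(), xs.min(), ys.max(), xs.max())  # top, left, bottom, right
    # Weighted centroid: pixel intensities act as weights, giving a
    # finer estimate of the output image's new center.
    weights = field[ys, xs]
    centroid = (np.average(ys, weights=weights),
                np.average(xs, weights=weights))
    return bbox, centroid
```

With a purely binary template the weighted centroid reduces to the ordinary center of mass; the fractional values produced by cubic interpolation are what refine the estimate.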
If multiple intact regions existed in the field, only one was used for mapping: based on the area property, the intact regions were sorted from largest to smallest, and only the largest was selected to fit the input image. This ensured maximal utilization of the remaining intact field. Then, using the list of pixels constituting the largest intact region, the widths and heights spanned by its pixels were calculated (Figure 3C). For each row in the intact field, the two bounding pixels were located and their horizontal coordinates subtracted to calculate the field width, BF_width, at that row. This calculation was iterated over all rows of the considered intact field to obtain BF_widths, and the same iteration was applied on a column basis to calculate BF_heights. Afterward, a scaling equation was used to determine the new size of the remapped output image, Width_remap and Height_remap. The resizing function was: