A Digital Microscreen for the Enhanced Appearance of Ocular Prosthetic Motility (an American Ophthalmological Society Thesis)





Purpose


This study aims to improve the apparent motility of ocular prosthetic devices using digital technology. Prevailing ocular prostheses are acrylic shells with a static eye image rendered on the convex surface. A limited range of ocular prosthetic movement and a lack of natural saccadic movements commonly cause an appearance of eye misalignment that may be disfiguring. Digital screens and computational systems may obviate current limitations in eye prosthetic motility and help prosthetic wearers feel less self-conscious about their appearance.


Methods


We applied convolutional neural networks (CNNs) to track pupil location under various conditions. These algorithms were coupled to a microscreen digital prosthetic eye (DPE) prototype to assess the ability of the system to capture full ocular ductions and saccadic movements in a miniaturized, portable, and wearable system.


Results


The CNNs captured pupil location with high accuracy. Pupil location data were transmitted to a miniature-screen ocular prosthetic prototype that displayed a dynamic image of the contralateral eye. The transmission achieved a full range of ocular ductions with grossly undetectable latency. Lack of iris and scleral color and detail, as well as constraints in luminosity, dimensionality, and image stability, limited the realism of the displayed eye. Yet the digitally rendered eye moved with the same amplitude and velocity as the native, tracked eye.


Conclusions


Real-time image processing using CNNs coupled to microcameras and a microscreen DPE may offer improvements in the amplitude and velocity of apparent prosthetic eye movement. These developments, along with ocular image precision, may yield a next-generation eye prosthesis. NOTE: Publication of this article is sponsored by the American Ophthalmological Society.


Each year, approximately 50,000 individuals lose an eye. Common reasons for removal of the entire eye (enucleation) or removal of the intraocular contents (evisceration) include ocular tumors, trauma, pain, phthisis bulbi, and infection.


Loss of an eye due to trauma or disease can lead to disfigurement and anxiety. Linberg and associates reported that concerns with self-image and anxiety were common in the anophthalmic population. Coday and associates found that 40% of patients with acquired anophthalmia or monocular vision believed their condition affected them socially. Patients reported feeling judged by their appearance rather than their personality, depression, a need for professional counseling, loss of self-esteem, and decreased participation in social events, with alcoholism in at least 1 patient. Furthermore, 23% of participants reported that their employment status was affected, with 6% needing a job change. Beyond psychosocial factors, enucleation or evisceration is associated with poorer health-related quality of life, poorer self-rated health, and more perceived stress than in the general population.


In an effort to restore ocular and facial appearance, removal of an eye is often coupled with placement of an orbital implant and fitting of an ocular prosthetic shell. The orbital implant replaces volume lost from the absence of the globe and supports the overlying ocular prosthesis. A variety of materials, in the form of a sphere or various other shapes, have been implanted into the orbit in this context. Most surgical techniques preserve the native rectus extraocular muscles with the goal of imparting some movement to the prosthetic eye to improve its realism.


Efforts to achieve prosthetic motility include coupling underlying orbital movement to the prosthetic shell with or without a direct fixation mechanism. In direct fixation systems, plastic or titanium pegs are drilled directly into porous orbital implants and coupled outside the orbit to a prosthetic shell. Peg systems improve motility, but complication rates from pegging vary widely from 7% to 67%; the most common complications include infection, discharge, and pyogenic granulomas. Hence, pegging has fallen out of favor with most surgeons.


Prevailing ocular prosthetic devices are thin acrylic half-shells ranging from 1.5 to 15 mm in thickness. The iris, as well as scleral vessels and other ocular details, are hand rendered onto the anterior aspect of the device. More recently, photo matching technology has allowed high-definition photographs of the contralateral healthy eye to be mirrored, adjusted for brightness and contrast, printed onto photo paper, and then transferred to acrylic shells via sublimation heat transfer or similar techniques. With this method, both the photographic iris and the scleral vessels are applied directly to the prosthetic surface. This technology can be combined with custom fitting of an ocular prosthetic shell into an anophthalmic socket. With photo matching or traditional hand rendering, detailed eye replication is achieved in many cases, but poor motility limits the realism of the prosthesis.


With or without a peg fixation system, some eye motility is usually achieved through preserved underlying socket movement that causes subtle prosthetic tilt or shift. However, the apparent eye movement is usually incomplete and causes an unsatisfactory appearance of eye misalignment, especially in nonprimary gaze. In addition, limitations in the velocity of movement fail to recapture the saccadic or fine darting eye movements that are especially common during conversation.


The smartwatch revolution has fueled innovation in high-resolution, high-pixel-density bitmap graphics sharply displayed on a tiny screen. Light-emitting diode (LED) technology allows for single-layer screens that can be cut to small sizes with preserved image resolution and viewing angles. Another generation of LED displays uses plastic substrates to create curved screens that are thin and lightweight. Although digital irises have been inset into ocular prosthetics to allow for dynamic pupillary function, the authors are not aware of a successful digital prosthesis with pupil-tracking technology that allows for conjugate eye movements.


Eye or pupil tracking has been performed for well over a century in the context of psychology and visual system research; much modern inquiry is focused on human-computer interaction. Two forms of pupil tracking have been developed based on the location of the sensor. Remote tracking uses a sensor far from the target individual (such as a camera on a computer or elsewhere in the room or general vicinity) and can be combined with face detection and alignment, saliency detection, or other geometric information. The advantage of this approach is that it is very robust to challenging images (eg, a highly rotated or partially blocked pupil) and is less sensitive to noise.


The other approach is head-mounted pupil tracking, in which the image of the eye is obtained from a device fixated to an individual's head. Here, tracking is achieved by using lower-level characteristics of the image such as blob detection, edge detection, or a combination of multiple low-level features. This method is more convenient because it is portable, but it is more sensitive to noise and to challenges such as subjects moving within and between environments and sensor/camera instability. Furthermore, poor illumination, eyelashes covering the pupil, off-axis gaze, reflections, and eyelid or eyeglass frame obscuration of the pupil are examples of challenges with head-mounted systems (Figure 1). Prior head-mounted pupil tracking software has used iris data sets (eg, CASIA or UBIRIS) that were captured mostly in well-illuminated and controlled conditions, unlike images captured with a wearable device. These pre-existing pupil tracking algorithms inadequately supplied input for a digital ocular prosthetic system in our initial experiments.




Figure 1


Challenges in pupil detection: (a) strong illumination; (b) weak illumination; (c) eyelashes covering the pupil; (d) highly off-axis; (e) reflections; (f) pupil at eye border; (g) pupil at image border; and (h) pupil at border of glasses covering the pupil.


Machine learning is increasingly used in bioinformatics. These algorithms can harness vast computational power and enormous storehouses of image data to create deep neural networks capable of locating objects within images with high accuracy. Application of neural networks to pupil tracking is a novel method for high-fidelity, minimal-latency identification of pupil position. Specifically, convolutional neural network (CNN)-based algorithms are an emerging trend for object and pupil tracking. Prior work using CNN approaches (eg, PupilNet and PupilDeconvNet) finds a region (ie, a feature) of an image that most likely contains the pupil and then refines this selection by running the network repeatedly in an end-to-end approach. One limitation of end-to-end approaches is the reliance on only one feature for pupil selection. Machine learning algorithms may more robustly detect pupil location, especially with head-mounted data acquisition systems.
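To make the approach concrete, the following is a minimal sketch of a CNN that regresses a pupil center from a grayscale eye image. The architecture, the 96 × 96 input resolution, and all layer sizes are illustrative assumptions; this is not the network used in this study.

```python
# Minimal sketch only: a small CNN that regresses a normalized (x, y)
# pupil center from a grayscale eye image. Layer sizes and the 96x96
# input resolution are assumptions for illustration.
import torch
import torch.nn as nn

class PupilCenterNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 96 -> 48
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, 2),  # normalized (x, y) pupil center
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PupilCenterNet()
dummy = torch.randn(1, 1, 96, 96)                     # one grayscale eye image
xy = model(dummy)                                     # predicted pupil center
loss = nn.MSELoss()(xy, torch.tensor([[0.5, 0.5]]))   # train against labeled centers
```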


In this report, we test the hypothesis that artificial intelligence (AI) can be incorporated into a miniaturized, wireless, portable, and head-mounted system that can receive real-time input for simultaneous display of full contralateral eye rotations on a digital microscreen ocular prosthesis device. Our main purpose is to develop a next-generation ocular prosthetic that more completely mimics eye motility in amplitude and velocity. Herein, we designed, created, and tested a wearable digital ocular prosthetic device.


Methods


The Institutional Review Board at the University of California, Irvine waived the need for approval of this experimental and computational research. Research was performed with adherence to the Declaration of Helsinki and all federal and California state laws. Consent was obtained for the use of facial images.


We applied CNNs to track pupil location. The CNN-based system comprised several pupil detection algorithms that were tested for percentage pixel error on images from the Labelled Pupils in the Wild (LPW) database of pupil images captured in a variety of environmental settings. The CNN was coupled to a microscreen digital prosthetic eye (DPE) prototype consisting of a spectacle-mounted pupil detection camera and a digital microscreen display with an attached processor. The display and processor were housed within a clear conformer that was placed in a silicone eyelid and socket replica casing. We assessed the ability of the system to grossly capture full ocular ductions and darting eye movements in the portable and wearable system. Particulars are as follows.
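For illustration, pixel-error evaluation of this kind can be scored as the fraction of frames whose predicted pupil center falls within a pixel threshold of the labeled center. The sketch below assumes pixel-coordinate predictions and an illustrative 5-pixel threshold; the study's exact scoring may differ.

```python
# A minimal sketch of pixel-error scoring for pupil detection, assuming
# predicted and ground-truth pupil centers in image pixels. The 5-pixel
# threshold is an illustrative choice, not taken from the study.
import numpy as np

def pixel_errors(pred, truth):
    """Euclidean distance in pixels between predicted and labeled centers."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return np.linalg.norm(pred - truth, axis=1)

def detection_rate(pred, truth, threshold_px=5.0):
    """Percentage of frames whose pixel error falls within the threshold."""
    return 100.0 * np.mean(pixel_errors(pred, truth) <= threshold_px)

pred = [(120, 64), (118, 70), (200, 90)]
truth = [(121, 65), (119, 69), (150, 88)]
print(detection_rate(pred, truth))  # ~66.7: two of three frames within 5 px
```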


Pupil Detection Subsystem


The Pupil Core eye-tracking headset was obtained from Pupil Labs (Berlin, Germany). This headset features a pair of 3D-printed spectacles with an infrared 200 Hz monocular camera mounted on the inferior lateral aspect of the glasses frame (Figure 2). The infrared camera records pupil movements and sends images to a processor attached to the headset for pupil detection. The processor's software features an application programming interface (API) that allows custom features to be added via a plugin.
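As an illustration of how downstream software can consume pupil coordinates, the sketch below subscribes to pupil data over the ZeroMQ interface that Pupil Labs' software publicly documents. The default port 50020 and the norm_pos/confidence field names follow that documentation and should be verified against the installed version; this is not the plugin code used in this study.

```python
# A hedged sketch of streaming pupil positions out of the Pupil Capture
# software over its ZeroMQ interface. Port and field names follow Pupil
# Labs' public documentation; verify against your installed version.
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")   # Pupil Remote default port
req.send_string("SUB_PORT")            # ask where pupil data is published
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")                # pupil detection topic

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    if datum["confidence"] > 0.6:      # ignore low-confidence detections
        x, y = datum["norm_pos"]       # pupil center, normalized [0, 1]
        print(topic.decode(), round(x, 3), round(y, 3))
```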




Figure 2


Pupil Labs Pupil Core 3D-printed spectacles with a monocular infrared camera mounted to the inferior lateral aspect of the frame.


Iris and Pupil Display Subsystem


The Tinyscreen OLED Tinyshield digital microscreen (Chesterfield, United Kingdom) is a 96 × 64-pixel organic light-emitting diode (OLED) display that is 25.8 × 25.0 mm in size and, to the authors' knowledge, the smallest screen on the market at the time of this investigation. An Atmel | SMART SAM D21 (San Jose, California, USA) processor was attached to the back of the screen for pupil display. This was a low-power microcontroller using the 32-bit ARM Cortex-M0+ processor with up to 256 KB of Flash and 32 KB of static random access memory. The device operated at a maximum frequency of 48 MHz and reached 2.46 CoreMark/MHz.
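To illustrate the display constraint, the sketch below mirrors a photograph of the contralateral eye and resamples it to the 96 × 64-pixel OLED resolution. The file names are placeholders, and this desktop-side Python is illustrative only; it is not the firmware that ran on the SAM D21 microcontroller.

```python
# Illustration of preparing an eye image for the 96 x 64-pixel OLED:
# mirror the contralateral eye photograph and fit it to the display's
# native resolution. File names are hypothetical placeholders.
from PIL import Image, ImageOps

eye = Image.open("contralateral_eye.jpg")
mirrored = ImageOps.mirror(eye)                    # left/right flip for the fellow socket
frame = mirrored.resize((96, 64), Image.LANCZOS)   # OLED native resolution
frame.save("dpe_frame.png")
```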


CNN Pupil Tracking


Data Sets of Pupil Images


Pupil images used to train the CNN were obtained from various publicly available databases, including those with noisy iris images and those that are highly off axis. The data set used to test the CNN was the LPW, obtained from images recorded by the Pupil Labs head-mounted tracking spectacles in a prior study. In 2016, Tonsen and colleagues recorded a pupil data set in "wild" conditions, capturing pupil images in different illuminations, at natural eccentric gazes, across ethnicities, and with glasses, mascara, and other foreign objects in view (Figure 1). LPW was chosen to test the algorithm because its images most closely approximated those obtained from head-mounted tracking for a DPE.


Proposed Algorithm


The proposed algorithm combined 3 different features for pupil detection. Initially, the pupil image was converted into a grayscale image. Three separate pupil feature detection algorithms, one designed to detect blobs, another edges, and a third pupil movement/position (see below for more detail), ran independently and concurrently. Each algorithm detected different aspects of the pupil; therefore, there were 3 separate attempts to identify the pupil. If one algorithm had lower accuracy because of the conditions of the image, another algorithm was selected, as sketched below.
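The following is a minimal sketch of that arbitration, assuming each detector returns a candidate pupil center with a confidence score. The detector internals and the confidence definition are placeholders, not the study's implementation.

```python
# A minimal sketch of selecting among three concurrent pupil detectors,
# assuming each returns ((x, y), confidence) or None. Detector internals
# and the confidence definition are hypothetical placeholders.
import cv2

def detect_pupil(frame_bgr, detectors):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # step 1: grayscale
    candidates = []
    for detect in detectors:          # blob-, edge-, and motion/position-based detectors
        result = detect(gray)         # -> ((x, y), confidence) or None
        if result is not None:
            candidates.append(result)
    if not candidates:
        return None                   # no detector found a plausible pupil
    # If one detector degrades under the current image conditions, the
    # highest-confidence alternative is selected instead.
    (x, y), _ = max(candidates, key=lambda c: c[1])
    return (x, y)
```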


Blob Feature Detection


A blob feature detection system was designed. In addition to blob detection of the dark pupil, the algorithm was modified to include separate corneal light reflex blob detection, and the center of gravity of the 2 blobs was calculated. First, a simple blob detection (SBD) algorithm was selected from the OpenCV library. Briefly, SBD iteratively extracts connected areas and calculates centers according to defined thresholds. To find the center of each circular area, the algorithm uses different features to define the edge selection, including color, area, circularity, inertia ratio, and convexity. To determine the center of gravity of the 2 blobs (the dark pupil and the white corneal light reflex), the SBD algorithm was modified as follows.


$$
X=\begin{cases}\dfrac{\sum_{i=0}^{n} x_i r_i^{2}}{\sum_{i=0}^{n} r_i^{2}}, & \sqrt{(x_0-x_i)^2+(y_0-y_i)^2}<r_0,\; i\in\mathbb{N}^{*}\\[1.5ex] x_0, & \sqrt{(x_0-x_i)^2+(y_0-y_i)^2}\ge r_0,\; i\in\mathbb{N}^{*}\end{cases}
$$

$$
Y=\begin{cases}\dfrac{\sum_{i=0}^{n} y_i r_i^{2}}{\sum_{i=0}^{n} r_i^{2}}, & \sqrt{(x_0-x_i)^2+(y_0-y_i)^2}<r_0,\; i\in\mathbb{N}^{*}\\[1.5ex] y_0, & \sqrt{(x_0-x_i)^2+(y_0-y_i)^2}\ge r_0,\; i\in\mathbb{N}^{*}\end{cases}
$$
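A minimal sketch combining these two steps follows, assuming OpenCV's SimpleBlobDetector and a direct transcription of the rule above. All thresholds are illustrative rather than the study's tuned settings, the primary dark-pupil blob is assumed to be first in the list, and a second detector pass with blobColor = 255 would be needed to capture the bright corneal light reflex.

```python
# Sketch: configure OpenCV's SimpleBlobDetector with the feature filters
# named above (color, area, circularity, inertia ratio, convexity), then
# combine blob centers per the modified center-of-gravity rule. All
# threshold values are illustrative placeholders.
import math
import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 0              # dark blobs such as the pupil
params.filterByArea = True
params.minArea = 200              # reject specks far smaller than a pupil
params.filterByCircularity = True
params.minCircularity = 0.6       # pupils are roughly circular
params.filterByInertia = True
params.minInertiaRatio = 0.4      # reject strongly elongated regions
params.filterByConvexity = True
params.minConvexity = 0.8         # reject ragged, concave regions
detector = cv2.SimpleBlobDetector_create(params)

def combined_center(blobs):
    """Center of gravity per the equation above.

    blobs: list of (x, y, r); blobs[0] is taken as the primary dark-pupil
    blob and the remainder as candidate corneal-light-reflex blobs.
    """
    x0, y0, r0 = blobs[0]
    near = [(x, y, r) for (x, y, r) in blobs
            if math.hypot(x0 - x, y0 - y) < r0]  # blobs within r0 of the pupil
    if len(near) <= 1:
        return x0, y0                            # fall back to the pupil center alone
    w = sum(r * r for _, _, r in near)           # area-proportional (r^2) weights
    return (sum(x * r * r for x, _, r in near) / w,
            sum(y * r * r for _, y, r in near) / w)

gray = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)
keypoints = detector.detect(gray)                # each keypoint: center pt and size
blobs = [(kp.pt[0], kp.pt[1], kp.size / 2) for kp in keypoints]
if blobs:
    print(combined_center(blobs))
```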
