Purpose
To determine medical student preferences for learning the ocular fundus examination and to assess their accuracy using different examination modalities.
Design
Prospective, randomized study of medical student education approaches.
Methods
First-year medical students received training in direct ophthalmoscopy using simulators and human volunteers. Students were randomized to receive vs not receive specific training on interpreting fundus photographs prior to accuracy assessments. Students’ preferences for each of the 3 methods (direct ophthalmoscopy on simulators or human volunteers, or use of fundus photographs) and recognition of normal and abnormal fundus features were assessed.
Results
Of 138 first-year medical students, 119 (86%) completed all required elements. For learning ophthalmoscopy, 85 (71%) preferred humans to simulators. For learning relevant features of the ocular fundus, 92 (77%) preferred photographs to ophthalmoscopy on simulators or humans. Accuracy of answers was better when interpreting fundus photographs than when performing ophthalmoscopy on simulators (P < .001). Performance improved after specific teaching about assessing fundus photographs before testing (P = .02). Students found examination of the ocular fundus easier and less frustrating when using photographs than when using ophthalmoscopy on simulators or humans. Eighty-four students (70%) said they would prefer to have fundus photographs instead of using the ophthalmoscope during upcoming clinical rotations.
Conclusions
Students preferred fundus photographs for both learning and examining the ocular fundus. Identification of ocular fundus features was more accurate on photographs than by direct ophthalmoscopy. In the future, the increasing availability of nonmydriatic ocular fundus photography may allow non-ophthalmologists to replace direct ophthalmoscopy in many clinical settings.
Examination of the ocular fundus is critical to the accurate diagnosis of many life- and sight-threatening medical conditions, and there is consensus that all graduating medical students and generalist physicians should be proficient in the fundus examination. Standards adopted by the Association of University Professors of Ophthalmology (AUPO) and endorsed by the American Academy of Ophthalmology and the International Council of Ophthalmology specifically require students to be able to visualize the red reflex, the retina, and the optic disc; to assess the optic disc for cupping, color, contour, margins, vessels, and edema; and to recognize changes associated with glaucoma and macular degeneration. Despite these recommendations, the fundus examination is performed infrequently and poorly by students and most non-ophthalmologists.
In response to students’ lack of proficiency in direct ophthalmoscopy, medical educators have suggested 2 main, divergent approaches to improving undergraduate ophthalmology education. Some have attempted intensive, longitudinal ophthalmology training in medical school, but this has failed to produce a meaningful improvement in students’ direct ophthalmoscopy skills, and students have continued to neglect the fundus examination in their internal medicine clerkships. Others have suggested that students master only basic direct ophthalmoscopy skills and focus instead on learning the signs of ophthalmologic emergencies, while remaining aware of other forms of ocular fundus pathology, even those not readily detectable by direct ophthalmoscopy. However, new ocular fundus imaging technologies, such as nonmydriatic ocular fundus cameras, provide easy-to-use and reliable alternatives to direct ophthalmoscopy and may allow these 2 approaches to converge. Removing the technical challenges of direct ophthalmoscopy should make examination of the ocular fundus easier and more effective, giving students additional time (currently spent learning how to use the direct ophthalmoscope) to learn key ophthalmologic signs and pathology.
We propose that tomorrow’s clinicians may use ocular fundus photography instead of direct ophthalmoscopy for routine ophthalmic screening in appropriate clinical settings, and we sought to explore the integration of digital fundus photography into medical student education.
The aim of our study was to evaluate students’ ability to identify major anatomic features of the ocular fundus on eye simulators vs fundus photographs and to determine student preferences regarding educational techniques (humans vs eye simulators vs fundus photographs) and examination methods (direct ophthalmoscopy vs fundus photography).
Methods
Study Population
This was a prospective, randomized study that was deemed exempt by the Emory University Institutional Review Board. The study adhered to the tenets of the Declaration of Helsinki. Participants were all first-year medical students at Emory University. Inclusion criteria required informed consent, attendance at an introductory eye examination lecture, and participation in a small-group skill training session in which students received further eye examination training and practiced direct ophthalmoscopy.
Overview of Protocol
First-year medical students attended a 45-minute introductory lecture on the eye examination. Subsequent instruction and evaluation were conducted in 2-hour small-group sessions. Each group was randomized into 1 of 2 training sequences (direct ophthalmoscopy skill training on human volunteers and then on anatomically and optically correct eye simulators, or vice versa). The groups were then randomized into 1 of 2 testing sequences, in which they either received or did not receive specific training on assessing fundus photographs before their abilities to identify ocular fundus features were tested. Testing and surveys were performed at predefined points during the protocol. The protocol is outlined in Figure 1.
Pretest Phase
The introductory lecture for the first-year medical school class taught visual pathway anatomy and the screening eye examination, including a short demonstration of direct ophthalmoscopy. Students then completed a 48-item pretest, which collected demographic data and information on prior exposure to ophthalmology; baseline diagnostic skills in interpreting fundus features were also determined by presenting 4 ocular fundus photographs with questions about the appearance of the optic nerve, retina, and blood vessels (Supplemental Figure 1, available at AJO.com).
Training Sequences
For the remainder of the study, students were assembled into 16 groups of 8-10 students. Each group was taught by an ophthalmology-trained faculty member, and training was standardized across rooms. Each of the 16 sessions began with 5 minutes of standardized instruction on how to use the direct ophthalmoscope. Each group was randomized into 1 of 2 training sequences (Figure 1). Training sequence 1 consisted of direct ophthalmoscopy skill training on healthy human volunteers (each volunteer had 1 pupil dilated to allow students to practice ophthalmoscopy with and without pupillary dilation), followed by training on anatomically and optically correct eye simulators. Training sequence 2 consisted of direct ophthalmoscopy skill training in the opposite order (eye simulators first, then volunteers).
Eye simulators were placed in Styrofoam models in the shape of a human head in order to reproduce obstacles to ophthalmoscopy, such as the nose and hair (Figure 2). Each eye simulator was constructed by gluing an ocular fundus photograph to the inside bottom of a white polyethylene canister (similar to a photographic film canister). A 16-diopter convex lens was inserted in the mouth of the canister to reproduce an optically correct eye, and a cover with a hole cut for a “pupil” was placed over the lens. Fundus photographs were correctly oriented in the heads to simulate right or left eyes. Each head (containing 2 eye simulators) was accompanied by a printed clinical vignette that presented a short clinical history appropriate to its fundus findings and included fundus photographs identical to those inside the eye simulators, highlighting important features to identify.
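(As a point of optics not stated explicitly by the authors: a 16-diopter lens has a focal length of f = 1/D = 1/16 m ≈ 62.5 mm, so if the photograph at the bottom of the canister sits near the focal plane of the lens, light leaving the simulator is roughly collimated, mimicking an emmetropic eye viewed with the direct ophthalmoscope focused near 0 diopters.)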
Twenty minutes were allotted for direct ophthalmoscopy training on humans and 20 minutes for the simulators. After each skill training portion, a quality survey was administered. Students were asked to rate, on 10-point Likert scales, the ease of viewing the ocular fundus with an ophthalmoscope (10 = easiest) and their level of frustration in attempting direct ophthalmoscopy (10 = most frustrated). They were also asked whether lack of time was the main source of frustration and whether they would spontaneously perform direct ophthalmoscopy routinely during their upcoming clinical clerkships. The quality surveys also included the validated Positive and Negative Affect Schedule (PANAS), consisting of 10 negative and 10 positive mood terms graded on a 5-point Likert response scale to document feelings engendered by the portion of training just completed (Supplemental Figure 2, available at AJO.com). Per standard scoring, each affect score is the simple sum of the 5-point Likert ratings for its 10 mood terms. Positive affect scores range from 10 to 50, with higher scores representing higher levels of positive affect; negative affect scores also range from 10 to 50, with lower scores representing lower levels of negative affect.
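As an illustration of this scoring, the following minimal R sketch (not the authors’ code; the data frame panas and the column names pa_1 through pa_10 and na_1 through na_10 are hypothetical) sums the ten 5-point ratings for each subscale:

    # Sketch of PANAS scoring: each subscale is the sum of ten items rated 1-5.
    # `panas` is a hypothetical data frame with one row per completed survey.
    panas$positive_affect <- rowSums(panas[, paste0("pa_", 1:10)])  # range 10-50
    panas$negative_affect <- rowSums(panas[, paste0("na_", 1:10)])  # range 10-50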
Testing Sequences
The students’ small groups were randomized into 1 of 2 testing sequences (Figure 1). Testing sequence 1 began with a 10-minute diagnostic training slideshow with instruction on how to determine whether a fundus photograph depicted the right or left eye and how to assess basic features of the optic nerve, retina, and blood vessels. Students then completed a simulator posttest and a photograph posttest, as well as a quality survey. The simulator posttest consisted of 4 simulator eyes; for each eye, students were asked whether they could visualize anything inside the eye simulator (1 of the 4 eyes contained a photograph of a vitreous hemorrhage, in which nothing could be visualized) and were questioned on the appearance of the optic nerve, retina, and blood vessels (Supplemental Figure 3, available at AJO.com). The photograph posttest consisted of 4 fundus photographs printed on paper, 2 of which were identical to fundus photographs in the simulator posttest (Supplemental Figure 4, available at AJO.com). Fifteen minutes were allotted for each posttest. The post-photograph quality survey asked the same questions as the post-simulator quality survey. In testing sequence 2, students completed the simulator posttest, photograph posttest, and post-photograph quality survey before receiving the diagnostic training slideshow.
Final Survey
A final quality survey was administered after all training and testing sequences were complete (Supplemental Figure 5, available at AJO.com). This quality survey asked which methods students preferred for learning how to use the direct ophthalmoscope (human volunteers vs eye simulators with clinical vignettes) and for identifying features of the ocular fundus (ophthalmoscopy vs ocular fundus photographs). It also asked whether they would prefer to use direct ophthalmoscopy or fundus photographs when evaluating a patient during their clinical clerkships, and how often they believed they would attempt to evaluate the ocular fundus over the following year as part of general physical examinations.
Data Analysis
Medians and interquartile ranges (IQRs) were reported for continuous data, and percentages were reported for categorical data. Statistical analysis was performed using R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, http://www.R-project.org). Test scores were compared using mixed linear regression models with random intercepts for subject, instructor, and day; affect scores were compared using 1-way analysis of variance with Tukey post hoc comparisons.
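As a sketch of these analyses (the authors do not name a specific mixed-model package; lme4 is assumed here, and the data frames scores and affect_scores and their column names are hypothetical), the models described could be specified in R as follows:

    library(lme4)  # common package for random intercept models (assumed; not named by the authors)

    # Test scores: mixed linear regression with random intercepts
    # for subject, instructor, and day.
    score_fit <- lmer(score ~ modality + (1 | subject) + (1 | instructor) + (1 | day),
                      data = scores)
    summary(score_fit)

    # Affect scores: 1-way ANOVA followed by Tukey post hoc comparisons.
    affect_fit <- aov(affect ~ modality, data = affect_scores)
    TukeyHSD(affect_fit)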
Results
Of 138 first-year medical students enrolled in the Emory University School of Medicine class of 2016, 132 consented to participate in the study and 119 (86% of the class) completed all required elements. Fifty-nine of the 119 (50%) were women, and the median age was 23 years. Eight students had completed prior courses related to ophthalmology (7 undergraduate and 1 graduate).
Performance Accuracy
The pretest and posttests (simulator and photograph) each had 48 items. Students answered an average of 28.8 of 48 questions (60%) correctly on the pretest. They answered an average of 8.2 additional questions correctly on the simulator posttest (mean 37.0 of 48; 77%) and 11.9 additional questions correctly on the photograph posttest (mean 40.7 of 48; 85%), both significantly better than the pretest (P < .001). Posttest performance was significantly better with fundus photographs than with simulators (P < .001).
Students who received the diagnostic training slideshow before completing the simulator and photograph posttests answered an average of 1.7 more questions correctly than students who received the diagnostic training after the posttests (P = .02). The order of training (human volunteers followed by simulators, or vice versa) had no impact on posttest scores (0.38 more questions correct for training on simulators before human volunteers; P = .61).
Student Preferences
Student-rated ease and frustration for the different examination modalities (human volunteers, simulators, and photographs) are shown in Figure 3. Limited time was identified as the primary source of frustration in examining human volunteers by 18 students (15%), in examining simulators by 21 students (18%), and in reviewing fundus photographs by 1 student (1%).