Accuracy and Speed of Electronic Health Record Versus Paper-Based Ophthalmic Documentation Strategies




Purpose


To compare accuracy and speed of keyboard and mouse electronic health record (EHR) documentation strategies with those of a paper documentation strategy.


Design


Prospective cohort study.


Methods


Three documentation strategies were developed: (1) keyboard EHR, (2) mouse EHR, and (3) paper. Ophthalmology trainees recruited for the study were presented with 5 clinical cases and documented findings using each strategy. For each case-strategy pair, findings and documentation time were recorded. Accuracy of each strategy was calculated based on sensitivity (fraction of findings in the actual case that were documented by the subject) and positive ratio (fraction of findings identified by the subject that were present in the actual case).


Results


Twenty subjects were enrolled. A total of 258 findings were identified in the 5 cases, resulting in 300 case-strategy pairs and 77 400 possible total findings documented. Sensitivity was 89.1% for the keyboard EHR, 87.2% for the mouse EHR, and 88.6% for the paper strategy (no statistically significant differences). The positive ratio was 99.4% for the keyboard EHR, 98.9% for the mouse EHR, and 99.9% for the paper strategy ( P < .001 for mouse EHR vs paper; no significant differences between other pairs). Mean ± standard deviation documentation speed was significantly slower for the keyboard EHR (2.4 ± 1.1 seconds/finding) and mouse EHR (2.2 ± 0.7 seconds/finding) strategies than for the paper strategy (2.0 ± 0.8 seconds/finding). Documentation speed of the mouse EHR strategy worsened with repetition.


Conclusions


No documentation strategy was perfectly accurate in this study. Documentation speed for both EHR strategies was slower than with paper. Further studies involving total physician time requirements for ophthalmic EHRs are required.


Advances in information technology have provided opportunities for instant data access and communication. Through these technologies, the electronic health record (EHR) has become a critical strategy for medical care reform in the United States, with the goal of improving quality and safety of delivery, while decreasing costs. These systems have potential to allow efficient storage, retrieval, and analysis of medical data that would be impossible with traditional paper-based records. To promote EHR adoption, the Health Information Technology for Economic and Clinical Health Act (HITECH) of 2009 offers financial incentives for hospitals and health providers who demonstrate so-called meaningful use of these systems in their practices.


However, EHR adoption in the American healthcare system has been limited, particularly in office-based practices. Among ambulatory practices across all specialties in 2008, only 4% reported having an extensive EHR, whereas 13% reported having a basic system. Among ophthalmology practices, the EHR adoption rate has been similarly low. A survey of American Academy of Ophthalmology members in 2008 revealed that only 12% of practices had implemented an EHR system. Potential barriers to EHR adoption in ophthalmology have been attributed to the complex medical and surgical components of the field, unique workflow, and documentation requirements with heavy emphasis on graphical representation of examination findings, high clinical volume, and heavy reliance on ancillary imaging devices with data transfer challenges. A recent survey conducted by a major research firm among physicians across numerous specialties found that ophthalmologists had the lowest level of satisfaction with their EHR systems.


Despite this ongoing shift toward EHR adoption, virtually all practicing ophthalmologists learned to document ophthalmic examination findings onto standardized paper-based templates. One key challenge is to translate these paper templates to computer-based EHR forms that are sufficiently accurate and efficient. An additional challenge is that most ophthalmologists are accustomed to documenting examination findings using hand-drawn diagrams, which may vary even among different ophthalmologists and often are difficult to implement with EHR systems. Most EHR documentation strategies rely on keyboard and mouse-based input methods, which inherently are more restrictive compared with hand-drawn sketches on paper. For example, mouse-based input typically requires selection of common prepopulated examination findings via menu widgets, whereas keyboard-based input provides textbox widgets for free text entry. Although these selection-based EHR methods potentially allow for standardized nomenclature for information exchange, they do not permit spatial representation of data, which is traditionally important in the ophthalmic examination. These limitations of EHR input methods may affect the quality and speed of clinical documentation.


The influence of EHR data entry strategies on ophthalmic documentation has not been well characterized. Better understanding of this relationship will permit future design of more efficient EHR data entry strategies for ophthalmologists. Overcoming the challenges of EHR data entry will yield longer-term benefits such as the potential for data retrieval, data analysis (e.g., serial intraocular pressure, large-scale registries), and quality improvement. This pilot study was designed to evaluate the accuracy and speed of keyboard and mouse EHR documentation strategies, compared with a traditional paper strategy, for ophthalmic examination documentation. Ophthalmologists were presented with standardized clinical cases, and the documentation process was analyzed using time-motion research methods.


Methods


This prospective cohort study was approved by the Institutional Review Boards at Columbia University Medical Center and Oregon Health & Science University. Data collection and analysis were performed at these institutions.


Subjects and Clinical Cases


Eligible study subjects were defined as ophthalmology residents or fellows and were recruited from the New York City area. The decision was made to restrict subjects to trainees because it was believed that this would produce a more homogeneous group for analysis, with less potential confounding from variability in age, experience, or computer skills compared with a group that included senior ophthalmologists. Written informed consent was obtained from all subjects before participation.


Five representative clinical cases were selected for this study from a publicly available website ( http://webeye.ophth.uiowa.edu/eyeforum/cases.htm ) that displayed grand rounds-type ophthalmology scenarios. The text of each case was read aloud and audiotaped by one of the authors (P.J.T.) for standardized presentation to study participants. Finally, each case was reviewed in detail, and each discrete examination finding was identified by consensus of all authors for subsequent data analysis. For example, an examination significant for microcystic edema and guttae in the cornea and 1+ anterior chamber cell was considered to have 4 distinct examination findings: edema, guttae, 1+, and cell. Negative examination findings were included if the absence of features was described explicitly in the case, but were not considered if their absence was not presented explicitly in the case. The total number of examination findings was tabulated for each of the 5 cases.


Development of Representative Documentation Strategies


Three documentation strategies were developed for this study: (1) a keyboard EHR strategy ( Figure 1 ), which provided a textbox-based widget for entry of all examination findings; (2) a mouse EHR strategy ( Figure 2 ), which consisted of a series of prepopulated drop-down widgets containing common ophthalmologic findings for selection; and (3) a paper strategy ( Figure 3 ), which allowed for hand-drawn sketches for diagramming the ophthalmic examination. To isolate the documentation strategies tested and to minimize variability (network lag, proprietary data storage formats, extraneous menus, etc.) found in commercial EHRs, the decision was made to create custom EHR strategies for this study.




Figure 1


Representative interface for keyboard electronic health record (EHR) documentation strategy of the ophthalmic examination developed by authors for this study. Textboxes are used for keyboard input of ophthalmic examination findings. Tab key or mouse is used to advance to the next box. OD = right eye; OS = left eye; SLE = slit lamp examination.



Figure 2


Representative interface of a mouse electronic health record (EHR) documentation strategy of the ophthalmic examination developed by authors for this study. Checkboxes (e.g., presence of guttae) and pull-down menus (e.g., severity from 1+ to 4+) with common ophthalmic examination findings are used for selection. Comment box is used for free text entry of additional findings. Scrollbar (right side) is used to advance the interface. OD = right eye; OS = left eye.



Figure 3


Paper-based strategy for documentation of the ophthalmic examination used for this study. Preprinted templates allow for spatial arrangement of ophthalmic examination findings to be hand written or hand drawn. A/C = anterior chamber; ACIOL = AC intraocular lens; Conj = conjunctiva; Ext = external structures; OD = right eye; OS = left eye; PCIOL = posterior chamber intraocular lens.


The 2 EHR strategies were created using a software development kit (Visual Basic 2008; Microsoft, Redmond, Washington, USA) and were based on common data entry methods familiar to the authors from commercial ophthalmology EHR systems. The paper strategy was derived from a standard paper-based template used by several faculty providers in the Columbia University Department of Ophthalmology, which was not used routinely for clinical documentation by any of the subjects recruited for this study.


Presentation of Cases


Each subject was given a 5-minute standardized tutorial on the use of each documentation strategy by the authors (P.C., P.J.T.). Additionally, subjects were asked to rate their comfort level with computers (novice, intermediate, advanced), whether they had used computers during their premedical training, and whether they were using EHRs in clinical practice. Each subject was presented with 5 clinical cases and was asked to document pertinent examination findings using each of the 3 strategies, resulting in 15 case-strategy pairs per subject. Each audiotaped case was presented immediately before documentation with the individual strategy and was repeated for each strategy. The subjects were instructed not to begin documenting until the presentation of the case had been completed. If requested by the subject, portions of the cases were repeated verbally for comprehension. To avoid any systematic bias from the order of presenting the 5 cases and 3 strategies to each subject, cases and strategies were each cycled sequentially in the same fixed rotation for every subject. Each subject was presented with case 1 using keyboard EHR, case 2 using mouse EHR, case 3 using paper, case 4 using keyboard EHR, case 5 using mouse EHR, case 1 using paper, case 2 using keyboard EHR, and so forth. Documentation time was recorded using a digital timer controlled by the authors.
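The rotation described above can be sketched as follows (labels are illustrative; the key point is that cases and strategies cycle independently, so 15 pairs cover every case-strategy combination exactly once because 5 and 3 share no common factor):

```python
# Sketch of the fixed case-strategy rotation described above.
# Case and strategy labels are illustrative stand-ins.
CASES = [1, 2, 3, 4, 5]
STRATEGIES = ["keyboard EHR", "mouse EHR", "paper"]

def presentation_order(n_pairs=15):
    """Return (case, strategy) pairs in presentation order.

    Cases and strategies each advance by one on every step, so the 15
    pairs enumerate all 5 x 3 combinations exactly once (gcd(5, 3) == 1).
    """
    return [(CASES[i % 5], STRATEGIES[i % 3]) for i in range(n_pairs)]

order = presentation_order()
# order[0] -> (1, "keyboard EHR"); order[5] -> (1, "paper")
```

Because every subject received the same rotation, the design balances strategies across cases but cannot separate learning effects from presentation order, which the study's regression analysis addresses.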


Data Analysis


Accuracy of documentation was assessed based on sensitivity and positive ratio (PR) of documentation compared with what was truly present in the actual audiotaped case presented to subjects. Sensitivity for each case-strategy pair was calculated as the number of findings identified by the subject that were truly present in the actual case, divided by the total number of actual findings in the case. For example, an examination with 4 findings significant for 2 quadrants of dot blot hemorrhages with clinically significant macular edema in the left eye described by the subject as dot blot hemorrhages with clinically significant macular edema would have a sensitivity of 2 of 4 (50%). PR for each case-strategy pair was calculated as the number of findings identified by the subject that were truly present in the actual case, divided by the number of positive findings reported by the subject. For example, an examination case that was truly significant for 2+ nuclear sclerosis (2 true findings: nuclear sclerosis, 2+ modifier) in which the subject identified as 2+ nuclear sclerosis with cortical changes (3 reported findings: nuclear sclerosis, cortical changes, 2+ modifier) would have PR of 2 of 3 (67%).


Speed of documentation for each case-strategy pair was defined as the time required to document each examination finding and was calculated by dividing the documentation time of the entire case by the number of findings identified by the subject. For example, a documentation time of 3 seconds for 2 examination findings would result in a calculated speed of 1.5 seconds/finding.
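The accuracy and speed metrics defined above can be sketched as set arithmetic. This is a minimal illustration, not the study's actual analysis code; findings are modeled as sets of strings with hypothetical names:

```python
# Hedged sketch of the study's accuracy and speed metrics.
# Findings are modeled as sets of short strings (names illustrative).

def sensitivity(actual, documented):
    """Fraction of true case findings that the subject documented."""
    return len(actual & documented) / len(actual)

def positive_ratio(actual, documented):
    """Fraction of the subject's reported findings truly present in the case."""
    return len(actual & documented) / len(documented)

def speed(total_seconds, n_documented):
    """Documentation speed in seconds per documented finding."""
    return total_seconds / n_documented

# Worked example from the text: case truly has 2+ nuclear sclerosis
# (2 findings); subject reports nuclear sclerosis, cortical changes, 2+.
actual = {"nuclear sclerosis", "2+"}
documented = {"nuclear sclerosis", "cortical changes", "2+"}
pr = positive_ratio(actual, documented)  # 2/3
rate = speed(3.0, 2)  # 1.5 seconds/finding
```

Note that the positive ratio is what the information-retrieval literature calls precision, and sensitivity corresponds to recall.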


Data were stored using an electronic spreadsheet program (Excel 2003; Microsoft). A 2-way mixed-effects analysis of variance was performed to characterize the variability in speed and accuracy of each strategy. Because the 5 cases and 3 documentation strategies were all performed in the same order by each of the 20 subjects, learning effects within each strategy were evaluated using a univariate linear regression model. Analysis was performed using statistical software (SPSS software version 15.0; SPSS, Inc, Chicago, Illinois, USA). Statistical significance was defined as P < .05.
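The learning-effect check amounts to fitting a line to documentation speed against repetition number within one strategy; a positive slope indicates slower (worsening) speed with repetition. The study performed this in SPSS; a minimal sketch of the underlying least-squares fit, with hypothetical data values, is:

```python
# Minimal sketch of a univariate linear regression (ordinary least
# squares) of documentation speed on repetition number. The data below
# are hypothetical; the study performed this analysis in SPSS.

def ols_fit(x, y):
    """Fit y = intercept + slope * x by least squares; return (intercept, slope)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x, slope

# Hypothetical seconds/finding over 5 successive cases with one strategy.
repetitions = [1, 2, 3, 4, 5]
speeds = [2.0, 2.1, 2.2, 2.3, 2.5]
intercept, slope = ols_fit(repetitions, speeds)  # slope 0.12 > 0: worsening
```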




Results


Study Subjects and Cases


Twenty subjects from 5 ophthalmology training programs in New York City were recruited for this study. Among this group, 17 were residents (postgraduate years 2 through 4), whereas 3 were fellows (postgraduate years 5 or 6). Nineteen (95%) of the 20 subjects reported having taken premedical coursework that involved the use of computers, and all (100%) reported at least an intermediate proficiency with computers. All (100%) of the subjects had learned to document the ophthalmic examination using paper and diagrams and were using paper charting in their ophthalmology clinic. Fifteen (75%) reported that their hospitals were in the process of implementing an EHR for their ophthalmology clinics, but no subjects were currently using EHRs in their practices.


Each case averaged 51.6 findings (range, 48 to 56). There were a total of 258 ophthalmic examination findings identified in the 5 cases, with 300 case-strategy pairs, and 77 400 total possible findings documented by the subjects.


Documentation Accuracy


Sensitivity for documentation of examination findings was 89% (range, 80% to 98%) for the keyboard EHR strategy, 87% (range, 80% to 94%) for the mouse EHR strategy, and 89% (range, 82% to 98%) for the paper strategy. There were no statistically significant differences in sensitivity among these 3 strategies. PR for documentation was 99.4% for the keyboard strategy, 98.9% for the mouse strategy, and 99.9% for the paper strategy ( P < .001 between mouse EHR vs paper strategies, no statistically significant differences between keyboard EHR vs mouse EHR or keyboard EHR vs paper strategies).


Documentation Speed


With cases analyzed individually, the paper strategy was faster than both the keyboard EHR and mouse EHR strategies in 4 of the 5 (80%) cases ( Table ). With all 5 cases analyzed together, mean documentation speed ± standard deviation was 2.4 ± 1.1 seconds/finding for the keyboard EHR strategy, 2.2 ± 0.7 seconds/finding for the mouse EHR strategy, and 2.0 ± 0.8 seconds/finding for the paper strategy. Documentation with the paper strategy was significantly faster than with either EHR strategy ( P < .001 for paper vs keyboard EHR, P < .001 for paper vs mouse EHR).


