The Value of Tests in the Diagnosis and Management of Glaucoma




Purpose


To assess the noneconomic value of tests used in the diagnosis and management of glaucoma, and explore the contexts and factors that determine such value.


Design


Perspective.


Methods


Selected articles from primary and secondary sources were reviewed and interpreted in the context of the authors’ clinical and research experience, informed by our perspective on the task of reducing the global burden of irreversible blindness caused by glaucoma. The value of any test used in glaucoma is addressed through 3 questions, concerning its contexts, its kind of value, and its implicit or explicit benefits.


Results


Tonometry, slit-lamp gonioscopy, and optic disc evaluation remain the foundation of clinic-based case finding, whether in areas of more or less abundant resources. In resource-poor areas, there is urgency in identifying patients at risk for severe functional loss of vision; screening strategies have proven ineffective, and efforts are hindered by inadequate allocation of support. In resource-abundant areas, the wider spectrum of glaucoma is addressed, with emphasis on early detection of structural changes of little functional consequence; these are increasingly the focus of new and expensive technologies whose clinical value has not been established in longitudinal and population-based studies. These contrasting realities in part reflect differences in the value ascribed, often implicitly, to the tests used in glaucoma.


Conclusions


The value of any test is determined by 3 aspects: its context of usage; its comparative worth and to whom its benefit accrues; and how we define historically what we are testing. These multiple factors should be considered in the elaboration of priorities for the development and application of tests in glaucoma.


This article explores the noneconomic value of glaucoma testing from the perspective of reducing the global problem of blindness caused by a preventable disease. The important enterprise of medical economists who define value as “health outcomes achieved per dollar spent” has never addressed the issue of glaucoma testing directly. For us, a test’s value revolves around 3 questions: In what context? What kind of value? For whom?


To address the value of different tests, we identify 4 major contexts for considering the disease of glaucoma. A fundamental context concerns whether we are addressing structural or functional matters. Spaeth has emphasized the distinction between the biological definition of “glaucoma”—the characteristics of optic nerve damage by which the disease process is structurally manifested—and the functional outcome: how glaucoma causes human affliction and disability. The scientific rigor required to discriminate pathophysiological mechanisms and demonstrate the effectiveness of interventions emphasizes the value of precise understanding; the urgency to arrest the scourge of glaucomatous blindness accentuates the value of relieving suffering.


The different emphases of these 2 values are reflected in a second major context: the manner in which socioeconomic resources for glaucoma care are globally mobilized, in terms of equipment and trained personnel, infrastructure, and the political will to provide health services. In resource-poor communities, preventing functional vision impairment is paramount; here the primary value of tests is for the efficient diagnosis of potentially blinding glaucoma. In resource-abundant settings, continuous care for the patients in various stages of the disease spectrum requires technologies whose value lies in early detection and sensitive monitoring of disease progression and response to therapy. It should be remembered that such loci of resource-abundant and resource-poor ophthalmic capacities do not necessarily correspond to outdated locutions of “first- and third-world” locations, but may be geographically congruent. Moreover, shifting economic realities over time may well affect the sustainability of resources in currently “abundant” communities.


The third context for understanding the value of testing in glaucoma lies in reconceptualizing the disease, and locating technology’s role among other key elements. In the introductory meeting of the first annual gathering of the World Glaucoma Association devoted to elaborating international consensus documents, Lee emphasized the importance of embracing standardized definitions, of elaborating clear disease-staging systems, and of devising technology to assist clinicians in improving care delivery. A perfect illustration of this holistic approach of conceptual reframing has been the revolutionary and simplified redefinition of terms for the primary angle-closure (PAC) disease spectrum and its 3 sequential clinical stages of suspect, closure, and glaucoma. These consensus efforts have facilitated invaluable comparative studies of the epidemiology of angle closure’s clinical presentations, while simultaneously providing a rational scaffold for researchers worldwide to assess the utility of various interventions at specific stages of the disease. This new system of terminology explicitly assigned primacy to slit-lamp gonioscopy as the technological “gold standard” of diagnosis, thus enabling research into the relative value of ancillary imaging technologies.


The fourth context for exploring the value of glaucoma tests is the context of time, in which innovative technology enables new visualization or quantification of disease-related phenomena so as to literally redefine the disease itself. As technology’s development accelerates, so too our conceptualization of the disease morphs, forcing us to reevaluate the tests used before—that is, reassess their value. This discussion of a “temporal context for value” is explored at the conclusion of our survey of the value of contemporary glaucoma tests, having first addressed the contexts of structure/function, of resource availability, and of locating technology’s role within a conceptual matrix of definitions, disease stages, and delivery of care.


Addressing Glaucoma in Resource-Poor Settings


Glaucoma is the world’s leading cause of preventable irreversible blindness, affecting an estimated 60.5 million persons and responsible for vision loss among 8.4 million in 2010. Damage to vision from glaucoma has been associated with a significant impact on activities of daily living, even at levels well before blindness. As pressure-lowering treatment for glaucoma has been demonstrated to reduce the rate of progression of vision damage, there is potential value in screening for the disease.


Primary Open-Angle Glaucoma


Population-based studies have consistently shown that traditional screening tests—Goldmann tonometry, automated visual fields, and evaluation of the disc—do not provide adequate positive predictive value for the efficient detection of open-angle glaucoma. There seems little promise from more recent population-based studies that new technologies—such as retinal tomography and frequency-doubling technology—will improve this situation soon. A recent Cochrane review could find no randomized controlled trials demonstrating the benefit of screening for open-angle glaucoma, and a single such trial for angle closure and angle-closure glaucoma has reported a negative result. Both meta-analyses in the developed world and focused studies in the developing world have concluded that glaucoma outreach screening on a population basis is not cost-effective.


Thus, clinic-based case finding appears to be the strategy most suited to reducing the global burden of vision morbidity attributable to glaucoma at the present time. It should be acknowledged that, as case detection of more severely affected persons is the appropriate goal in areas of limited resources, sensitivity and specificity of examinations, such as visual field testing, may be improved over what population-based studies suggest. Additional work is needed to measure or model the proportion of glaucoma blindness that might be averted by pursuing the limited strategy of targeting persons presenting with advanced glaucoma, as advocated here. It seems likely that a large proportion of blinding glaucoma will still go undetected with this approach.


Medical therapy for glaucoma involves potentially lifelong adherence to therapies that may be expensive, difficult to obtain in poor areas, and not always well tolerated. As such, pharmacologic treatment is rarely well suited for use in resource-poor areas. Alternatively, surgical therapy for glaucoma appears to be associated with increased prevalence of lens opacity and vision loss. Offering surgical therapy only to persons whose glaucoma is more immediately vision-threatening may reduce iatrogenic morbidity in settings where surgery is the only practical approach, and may also improve accuracy in case detection.


The specificity of visual field testing appears to be poor in identifying glaucoma among persons without previous field-taking experience, and perimeters are expensive compared to hand-held condensing lenses and direct ophthalmoscopes used to evaluate the optic nerve. For these reasons, assessment of the optic nerve is currently the key examination in identifying potentially blinding glaucoma in resource-poor areas.


Agreement among ophthalmologists in distinguishing normal optic nerves from those with varying degrees of glaucoma appears to be moderate, and may or may not exceed the diagnostic accuracy of quantitative imaging devices. From a cost perspective, assessment of the nerve by subjective ophthalmoscopy is clearly preferable for areas with limited resources. Performance is likely to be higher in detecting advanced (and therefore more immediately potentially blinding) glaucoma, as opposed to disease of lesser severity. Various strategies might be employed to increase the accuracy of disc examination in glaucoma case finding, including the use of photographic standards for comparison, photographic documentation for on-site or remote evaluation (where practical), and 2-tier screening, with confirmation of suspected cases by more highly trained personnel.


It is clear nonetheless that training in the recognition of the glaucomatous optic nerve is a critical prerequisite to such screening. Few studies have examined the impact of training on accuracy in optic nerve assessment, and fewer still in areas of limited resources. Studies to validate the efficacy and cost-effectiveness of training strategies to promote recognition of potentially blinding glaucomatous optic neuropathy on the basis of subjective examination of the nerve are a critical prerequisite for enhanced clinic-based case finding in resource-poor areas.


Population-based studies have suggested that assessment of intraocular pressure (IOP) by itself performs poorly in identifying prevalent glaucoma cases. There is little evidence to suggest that isolated pressure testing would be more useful in case finding of more severe glaucoma. However, measurement of IOP in combination with assessment of the optic nerve might identify persons at sufficient risk of severe lifetime damage to justify intervention, even in cases where optic nerve pathology is not yet visible. Under the presumption that a slit lamp is required for optimal assessment of the optic nerve and anterior chamber angle, the additional cost of a Goldmann applanation tip is modest. Training requirements for Goldmann applanation tonometry (GAT) have not been well studied; while they are likely greater than for noncontact applanation, Tonopen, and other widely available devices, the low cost and simplicity of GAT make it preferable.


Primary Angle-Closure Glaucoma


Primary angle-closure glaucoma (PACG) appears to be associated with a 3- to 4-fold increased risk of blindness compared to open-angle glaucoma. Peripheral iridectomy (PI), whether by incisional or laser techniques, has been proven effective in preventing recurrent attacks among eyes with acute angle closure and fellow eyes in such cases. The effectiveness of PI in the short- and long-term reduction of intraocular pressure in nonacute PACG has also been a relatively consistent finding in studies of Asian and European eyes with narrow angles only, and without substantial peripheral anterior synechiae or optic nerve damage. Trials to address the impact of PI for narrow angles alone on the development of synechiae and glaucoma are ongoing; clinical practice in many areas favors treating such patients. Although screening paradigms for PACG continue to be evaluated, stage-specific interventions, such as the apparent advantage of phacoemulsification in eyes with PAC and significant cataract, and the recent recognition of the high heritability of narrow angles and PACG among Chinese siblings, continue to enhance screening and therapeutic strategies addressing this devastating form of glaucoma.


For all of these reasons, routine assessment of the anterior chamber angle with gonioscopy, or routine slit-beam assessment of the peripheral angle followed by targeted gonioscopy, would appear to be part of the case-finding process in areas of limited resources, particularly those with high prevalence of angle closure. Simpler tests, such as oblique illumination of the eye, do not appear to provide sufficient diagnostic accuracy to be useful as stand-alone tests. Slit-beam assessment of the peripheral angle has been suggested as an alternative to gonioscopy, with acceptable (though somewhat reduced) accuracy. Requiring as it does a slit lamp, it offers few advantages from a resource standpoint over gonioscopy, though it is quicker, is less invasive, and may have fewer requirements for training. Recent studies have suggested that newer technologies may not detect narrow angles as accurately as gonioscopy, in large part because of poor specificity. In any case, these devices are not practical for use in this context.


Reliance on gonioscopy as the key technology for assessment of the anterior chamber angle in resource-poor areas presupposes a significant investment in training. Little research exists to indicate how much training may be required to bring examiners to a level of competence, or what training strategies are most efficient. One report suggests that use of a slit lamp–mounted reticule, facilitating direct measurement of angle structures through the goniolens, may improve agreement between trainees and experienced observers compared to conventional subjective angle grading.


In summary, the following modalities are recommended for detection of potentially blinding glaucoma in resource-poor areas: assessment of the optic nerve with a slit lamp and condensing lens, measurement of IOP by GAT, and gonioscopic evaluation of the anterior chamber angle. In cases where a slit lamp is already available (a tenable assumption for many rural facilities in Asia and Latin America, though not necessarily Africa), the cost of a full set of screening tools for glaucoma would be several hundred US dollars. In resource-poor settings, assuming that surgical treatment would be offered to patients with sight-threatening disease and treatment is not routinely offered for early glaucoma and ocular hypertension with low or modest risk of visual disability, the complex issue of assessing glaucoma progression may be moot. In such settings, assessment of the visual field and photographic or other modalities to monitor changes in optic nerve appearance would not be necessary.




Addressing Glaucoma in Resource-Abundant Settings


The basic technologies essential for diagnosis of glaucoma in resource-poor circumstances are, in resource-abundant areas, universal and familiar. But they are available with so many variations and alleged enhancements that confusion often surrounds what is genuinely essential for glaucoma care. The fundamentals remain simple: IOP is the only physiological parameter we can currently alter to interrupt the progression of glaucoma; the optic nerve (and surrounding nerve fiber layer) is our primary objective focus to discern and monitor structural manifestations of the disease; and perimetry is our primary means of assessing the disease’s impact on a person’s perceptual function and, ultimately, quality of life (QOL). In the context of abundant resources, clinicians need both to detect the presence of disease and to monitor its progression and response to pressure-lowering interventions.


Interestingly, each of these triadic pillars of glaucoma assessment—tonometry, disc evaluation, and perimetry—demonstrates strengths and weaknesses, which color the value of the testing itself. For each diagnostic test there is evaluative literature based on scientific criteria, exploring the test’s reliability, validity, and reproducibility. And yet clinically, “imperfect” testing devices are the key both to diagnostic case finding and to monitoring the success of interventions against disease. This interplay of “what is ideal” and “what works” resonates with the tension described earlier between “scientific rigor” and “urgency” in delivering real-world care.


Tonometry


Goldmann applanation tonometry is not as manometrically accurate as the pneumatonometer for IOP measurement, is subject to interobserver measurement variability of approximately 2.5 mm Hg, and is significantly affected by central corneal thickness (CCT). It nevertheless has served as the standard for IOP measurement in every major clinical trial for the past 50 years. When available, pachymeters that measure CCT enhance the value of applanation tonometry. Because precise nomograms to adjust measured IOP according to CCT have never been convincingly validated, one knowledgeable expert advises clinicians to simply categorize CCT measurements as “thin, average, or thick” and approximate the probable IOP range.


Alternative tonometers—such as the dynamic contour or noncontact air devices, Tonopen, and iCare rebound tonometers—have yet to be evaluated in large-scale studies for their cost-effectiveness and durability, and their advantages over Goldmann tonometry have not been convincingly demonstrated.


It is important to remember that the IOP is a physiological variable with significant intra-individual fluctuation, which we ascertain on the smallest of time scales: applanation readings taken in each eye every 3 months, at 5 seconds per eye (4 visits × 2 eyes × 5 seconds = 40 seconds), amount to less than 1 minute of measurement per year. In the absence of reliable 24-hour telemetric monitoring devices, we have no information as to how representative such sampling is.


The value of any tonometer’s “accuracy” must also be considered within the larger context of IOP’s complex role in glaucoma. Measured IOP depends not only on instrumentation, but also on many other variables, known and unknown: for example, the patient’s posture sitting or supine; orbital size and lid pressures on the globe; and diurnal and pharmacologic effects. Similarly, the impact of IOP on glaucoma pathophysiology seemingly involves complex interactions among various pressure-based systems, such as cardiac and peripheral vascular pressures and translaminar and intracranial pressures. Yet another aspect of IOP’s imperfect accuracy in clinical practice is reflected in the broad range of definitions used in the literature for defining both “ocular hypertension” and “successful IOP outcomes” following trabeculectomy surgery. And despite near-consensus among glaucomatologists for the key role of IOP as a risk factor for both the development and progression of POAG, there is a valuable contrarian review of clinical glaucoma trials that concludes there is only a poor correlation between IOP levels and progressive vision loss.


So how can we adjudicate the value of competing tonometer technologies, whose differences in accuracy or independence from corneal factors such as thickness or hysteresis may result in variances of only a few mm Hg? Just as the consensual reframing of PACG disease led to a uniformity of terms, which in turn significantly enhanced comparability among studies, a similar recontextualization that explicitly addresses IOP’s variabilities may be productive. Perhaps consensus can be reached regarding standards of IOP reporting: by range of readings? By maximum IOP? By averaging a set number of readings? By endorsing standard devices and measurement protocols? Such conceptual clarification might well provide a fresh perspective for considering the value of tonometers, old or new.


Slit Lamp, Gonioscopy, and Angle Imaging


Although the slit lamp’s ubiquitous versatility is often taken for granted, it remains of incontrovertible value for basic and advanced glaucoma care in all settings. Its binocular platform allows rapid gonioscopy to distinguish open-angle from angle-closure disease, as well as the identification of secondary causes of the glaucomas, such as trauma or neovascularization. It permits reliable applanation tonometry, and it facilitates stereoscopic visualization of the disc and retina with condensing lenses (eg, 60-diopter or 90-diopter).


Outside of areas where a high incidence of angle-closure disease requires regular practice, clinicians in resource-abundant communities may be uncomfortable with their gonioscopic skills, and seek technological alternatives to validate, or substitute for, subjective gonioscopy. Although new angle-imaging technologies, such as ultrasonic biomicroscopy (UBM) and anterior-segment optical coherence tomography (AS-OCT), are becoming more widely available in university and subspecialty practices in resource-abundant settings, they are not yet routinely used. Beyond their use in research, however, these technologies currently best serve as adjunctive devices for the elucidation of pathogenic mechanisms—such as why an angle remains narrow following a patent laser iridotomy—and not as substitutes for gonioscopy. They neither accurately identify eyes at risk for angle closure for screening purposes nor demonstrate reliable agreement in quantifying angle structures and relationships.


Another current limitation of angle-imaging devices is that they provide only a few cross-sectional images of the angle—1 slice with each UBM image, or simultaneous 180-degree sections of opposite sides of the angle with AS-OCT. Thus neither instrument at present provides the 360-degree view of the angle or the dynamic information about iris-trabecular contact revealed by indentation (“compression”) gonioscopy at the slit lamp. Slit-lamp gonioscopy currently remains the technical gold standard for evaluation of the angle in both clinical and academic research settings.


Optic Nerve and Retinal Nerve Fiber Layer Imaging


In the past several years there has been a proliferation of optic nerve and retinal nerve fiber layer imaging devices: confocal scanning laser tomography (Heidelberg Retinal Tomograph [HRT]; Heidelberg Engineering GmbH, Heidelberg, Germany); 2 types of instruments for OCT (an initial incarnation using time-domain instrumentation and more recent versions using a higher-resolution spectral-domain algorithm [SD-OCT], also known as Fourier-domain); and a device (GDx; Carl Zeiss Meditec Inc, Dublin, California) that measures only the retinal nerve fiber layer (RNFL) using laser polarimetry. The implicit expectation for these instruments’ value has been that high-resolution imaging of the optic disc and its surrounding RNFL will objectively identify glaucomatous damage and progression earlier and more accurately than the clinician observer.


The optic nerve imaging literature, describing as it does a new and rapidly changing technology, is—not surprisingly—replete with unresolved issues: the detection of structural changes with inconsistent functional correlations; difficulty in extracting meaningful data from anomalously shaped or sized discs, as seen in highly myopic individuals; and lack of agreement among different machines using the same SD-OCT technology, as for example when measuring the RNFL thickness. In the past 2 decades the utility of optic nerve imaging in screening programs to detect POAG has fallen far short of expectations.


Current devices show their greatest promise in eyes with significant risk factors or with known POAG, whose structural alterations may herald visual field (VF) progression. The positive predictive value of any test for glaucoma will of course be higher when the test is applied to persons at higher risk of disease. Thus, appropriate clinical acumen in the application of testing will naturally increase test usefulness. Sophisticated arguments have been made that combinations of tests may improve the performance of any single test in isolation, with the identification of clinically relevant changes after integrating diverse measures (such as computerized perimetry data with HRT or OCT imaging)—but often these efforts show only equivocal sensitivities and specificities. The few population-based attempts to improve on the detection of glaucoma using this integrative strategy have not been very successful. Independently validated longitudinal databases, which would allow comparison of a single patient’s data with large tested populations of normal or glaucomatous eyes, are not yet available.
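The dependence of positive predictive value on pre-test risk follows directly from Bayes’ theorem, and can be made concrete with a brief calculation. The sketch below uses hypothetical sensitivity, specificity, and prevalence figures chosen for illustration only; they are not drawn from any of the studies discussed here.

```python
# Illustrative Bayes calculation: how pre-test risk (prevalence) drives
# the positive predictive value (PPV) of a diagnostic test.
# All numeric values below are assumptions for illustration only.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV = P(disease | positive test), computed via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same hypothetical test (85% sensitive, 90% specific) applied to:
population_screening = ppv(0.85, 0.90, 0.02)  # assumed ~2% prevalence
high_risk_clinic = ppv(0.85, 0.90, 0.30)      # assumed ~30% prevalence

print(f"PPV, population screening: {population_screening:.0%}")
print(f"PPV, high-risk clinic:     {high_risk_clinic:.0%}")
```

With these assumed figures, a positive result in unselected population screening is far more likely to be a false alarm than the identical result obtained in a clinic population enriched for risk, which is the statistical core of the case for clinic-based case finding.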


For many glaucoma specialists these instruments are of adjunctive rather than essential value in following patients. In our own practices, we commonly obtain optic nerve imaging for the measurement of the disc size (HRT or OCT) and/or baseline evaluation of the nerve fiber layer (OCT or GDx) at the time of the initial optic nerve evaluation, especially in the presence of an anomalously sized or shaped disc. In the absence of suggestive VF change, only rarely do we order follow-up examinations with any of these devices. Instead we perform careful 78-diopter or 90-diopter stereo evaluation of the disc at the slit lamp, with documentation of the nerve’s appearance, at each visit. We rely on IOP control, clinical disc assessments, and computerized visual fields as our primary measures of clinical stability.


The equivocal value of optic nerve imaging for following glaucoma patients is in stark contrast to the incontrovertible OCT contribution to managing retinal disease: imaging presents a clear and rapidly available representation of the microscopic structural disarray, such as cystoid macular edema, manifesting over a relatively brief time frame, and whose structural response to therapy can be functionally correlated. In contrast, the subtle, long-term structural optic nerve head/RNFL alterations in glaucoma detected by the newest generations of imaging devices are of uncertain functional and clinical significance. Perhaps with future refinements and research this situation will improve—although the rapid pace of computerized innovation may preclude tracking a chronic, slowly progressive disease with a consistently available and stable technology.


Digital stereo-disc photography remains an invaluable technology across a wide range of resource capacities, yet its full potential is often neglected. Compared to refinements in nerve imaging devices, digital capture of optic nerve images has elicited comparatively little investment, with several excellent digital cameras no longer commercially available.


Despite reports of variable agreement among expert and generalist clinicians in distinguishing normal from glaucomatous discs by photographs, stereo disc photography has been the gold standard against which imaging devices are compared. In 4 major clinical trials investigating ocular hypertension and glaucoma—the Ocular Hypertension Treatment Study, European Glaucoma Prevention Study, Collaborative Normal-Tension Glaucoma Study, and Early Manifest Glaucoma Trial—photographic evidence of disc change was used as the primary endpoint, in conjunction with perimetric loss.


Precise image alignment technology now allows for digital photographs of the optic disc, taken at separate times, to be rapidly alternated in a “flicker” presentation, so as to highlight subtle shifts of vessel position, transient small disc hemorrhages, or rim changes with progressive cupping. With the ever-decreasing cost of high-resolution digital cameras and widespread use of computerized image displays, there appears to be an opportunity for government-supported or niche-industry initiatives to refine and market simple, high-quality disc camera and flicker-based image-analysis programs as an affordable alternative to expensive and rapidly changing imaging instruments.


Visual Fields


The evolution from manual, kinetic Goldmann and Tübingen perimetry a generation ago to computerized standard automated perimetry (SAP) has made an essential contribution to glaucoma diagnosis and progression monitoring. SAP’s innovations (eg, the pattern deviation map, mean deviation measure, and glaucoma hemifield test with the Humphrey-Zeiss Field Analyzer; Carl Zeiss Meditec, Inc) have been used in nearly all major clinical trials. Changes in SAP specifically served as the major endpoint for both the Advanced Glaucoma Intervention Study and the Collaborative Initial Glaucoma Treatment Study.


Although the value of SAP as a fundamental tool to diagnose and monitor glaucoma cannot be overstated, its limitations are equally well recognized. These tests are difficult for many patients to perform consistently and reliably, especially the very young, the elderly, and the infirm.


Technological innovation in perimetry has concentrated on 3 major areas: 1) simpler instruments for screening, such as the frequency-doubling threshold device (eg, FDT or Humphrey Matrix; Carl Zeiss Meditec, Inc); 2) instrumentation (eg, short-wavelength automated perimetry [SWAP]) to detect earlier signs of glaucomatous damage than found with SAP; and 3) interpretation software to analyze data from multiple visual fields, so as to distinguish variability from progression (eg, the Glaucoma Progression Analysis or Visual Field Index on the Humphrey devices, or the stand-alone PROGRESSOR software; Medisoft, Leeds, United Kingdom).


Much of perimetric development has concentrated on detecting early glaucomatous changes, either by means of rapid screening devices such as the FDT, or by more demanding tests installed on high-end automated perimeters that screen for blue-yellow perceptual defects as signs of disease appearing prior to changes with SAP. This led to the development of SWAP testing for both the Humphrey and Octopus perimetry platforms. Despite initial reports heralding SWAP’s greater sensitivity in detecting early change prior to SAP, recent studies suggest there may be no difference between SWAP and SAP in identifying early conversion to perimetrically proven glaucoma. At the other end of the glaucoma spectrum from early disease is end-stage VF loss; computerized perimetry is less flexible than kinetic Goldmann testing for monitoring such advanced perimetric damage. However, if a small central visual field remains, testing with a smaller 10-degree test pattern, using either standard size-III or larger size-V stimulus sizes, can be useful in monitoring.


Interpretation of a patient’s visual fields over time remains burdensome for many clinicians, even when assisted by software innovations. One common experience in SAP testing is the phenomenon whereby abrupt glaucoma-like worsening on a visual field appears in the absence of clinical changes in IOP control or disc appearance. To place inconsistent test results in context and to establish a reliable baseline, it has been recommended that all patients with manifest glaucomatous VF loss ideally undergo 6 SAP tests in the first 2 years after diagnosis, the better to identify those at risk for rapid progression. Similarly, other researchers have suggested that in the event of sudden apparent VF worsening, a series of 3 tests should be performed for confirmation. These guidelines have been derived from expert studies of the characteristics of SAP testing itself; but they place unreasonable demands on testing time for patient and medical staff, and are unlikely, in the United States, to be reimbursed by insurance. Nevertheless, these suggested standards can be adjusted for practicality: obtain at least 2 tests per year in any eye with manifest field loss to enhance early detection of rapid progression; and always confirm any abrupt field change with a second test. This illustrates the healthy compromise between rigorously established “evidential value” and the “clinical value” of visual field testing for glaucoma patients in clinical practice.




Addressing Glaucoma in Resource-Abundant Settings


The basic technologies essential for diagnosis of glaucoma in resource-poor circumstances are, in resource-abundant areas, universal and familiar. But they are available with so many variations and alleged enhancements that confusion often surrounds what is genuinely essential for glaucoma care. The fundamentals remain simple: IOP is the only physiological parameter we can currently alter to interrupt the progression of glaucoma; the optic nerve (and surrounding nerve fiber layer) is our primary objective focus to discern and monitor structural manifestations of the disease; and perimetry is our primary means of assessing the disease’s impact on a person’s perceptual function and, ultimately, quality of life (QOL). In the context of abundant resources, clinicians need both to detect the presence of disease and to monitor its progression and response to pressure-lowering interventions.


Each of these 3 pillars of glaucoma assessment—tonometry, disc evaluation, and perimetry—demonstrates strengths and weaknesses that color the value of the testing itself. For each diagnostic test there is an evaluative literature based on scientific criteria, exploring the test’s reliability, validity, and reproducibility. And yet clinically, “imperfect” testing devices are the key both to diagnostic case finding and to monitoring the success of interventions against disease. This interplay of “what is ideal” and “what works” resonates with the tension described earlier between “scientific rigor” and “urgency” in delivering real-world care.


Tonometry


Goldmann applanation tonometry is not as manometrically accurate as the pneumatonometer for IOP measurement, is subject to interobserver measurement variability of approximately 2.5 mm Hg, and is significantly affected by the central corneal thickness. It nevertheless has served as the standard for IOP measurement in every major clinical trial for the past 50 years. When available, pachymeters that measure the central corneal thickness (CCT) enhance the value of applanation tonometry. Because precise nomograms to adjust measured IOP according to CCT have never been convincingly validated, one knowledgeable expert advises clinicians to simply categorize CCT measurements as “thin, average, or thick” and approximate the probable IOP range.


Alternative tonometers—such as the dynamic contour and noncontact air devices, the Tonopen, and iCare rebound tonometers—have yet to be evaluated in large-scale studies for their cost-effectiveness and durability, and their advantages over Goldmann tonometry have not been convincingly demonstrated.


It is important to remember that IOP is a physiological variable with significant intra-individual fluctuation, which we ascertain on the smallest of time scales: applanation readings taken in each eye every 3 months, at 5 seconds per eye, amount to less than 1 minute of measurement per year. In the absence of reliable 24-hour telemetric monitoring devices, we have no information as to how representative such sampling is.
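The sampling arithmetic above can be made explicit; the figures (quarterly visits, both eyes, about 5 seconds per applanation reading) are the illustrative assumptions from the text, not measured values:

```python
# Back-of-envelope sketch of how little of the year IOP is actually sampled.
visits_per_year = 12 // 3            # one clinic visit every 3 months
seconds_per_visit = 2 * 5            # 2 eyes x ~5 s per applanation reading
measured_s = visits_per_year * seconds_per_visit   # 40 s of IOP data per year

seconds_per_year = 365.25 * 24 * 3600
fraction_sampled = measured_s / seconds_per_year

print(f"{measured_s} s measured per year; fraction sampled = {fraction_sampled:.1e}")
```

Under these assumptions, a year of routine care samples IOP for roughly one-millionth of the time over which it fluctuates.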


The value of any tonometer’s “accuracy” must also be considered within the larger context of IOP’s complex role in glaucoma. Measured IOP depends not only on instrumentation, but also on many other variables, known and unknown: for example, the patient’s posture, sitting or supine; orbital size and lid pressures on the globe; and diurnal and pharmacologic effects. Similarly, the impact of IOP on glaucoma pathophysiology seemingly involves complex interactions among various pressure-based systems, such as cardiac and peripheral vascular pressures and translaminar and intracranial pressures. Yet another aspect of IOP’s imperfect accuracy in clinical practice is reflected in the broad range of definitions used in the literature for both “ocular hypertension” and “successful IOP outcomes” following trabeculectomy surgery. And despite near-consensus among glaucomatologists on the key role of IOP as a risk factor for both the development and progression of POAG, there is a valuable contrarian review of clinical glaucoma trials that concludes there is only a poor correlation between IOP levels and progressive vision loss.


So how can we adjudicate the value of competing tonometer technologies, whose differences in accuracy or independence from corneal factors such as thickness or hysteresis may result in variances of only a few mm Hg? Just as the consensual reframing of PACG led to a uniformity of terms, which in turn significantly enhanced comparability among studies, a similar recontextualization that explicitly addresses IOP’s variability may be productive. Perhaps consensus can be reached regarding standards of IOP reporting: by range of readings? By maximum IOP? By averaging a set number of readings? By endorsing standard devices and measurement protocols? Such conceptual clarification might well provide a fresh perspective for considering the value of tonometers, old or new.


Slit Lamp, Gonioscopy, and Angle Imaging


Although the slit lamp’s ubiquitous versatility is often taken for granted, it remains of incontrovertible value for basic and advanced glaucoma care in all settings. Its binocular platform allows rapid gonioscopy to distinguish open-angle from angle-closure disease, as well as the identification of secondary causes of the glaucomas, such as trauma or neovascularization. It permits reliable applanation tonometry, and it facilitates stereoscopic visualization of the disc and retina with condensing lenses (eg, 60-diopter or 90-diopter).


Outside of areas where a high incidence of angle-closure disease requires regular practice, clinicians in resource-adequate communities may be uncomfortable with their gonioscopic skills, and seek technological alternatives to validate, or substitute for, subjective gonioscopy. Although new angle-imaging technologies, such as ultrasonic biomicroscopy (UBM) and anterior-segment optical coherence tomography (AS-OCT), are becoming more widely available in university and subspecialty practices in resource-abundant settings, they are not yet routinely used. Beyond their use in research, however, these technologies currently best serve as adjunctive devices for the elucidation of pathogenic mechanisms—such as why an angle remains narrow following a patent laser iridotomy—and not as substitutes for gonioscopy. They neither accurately identify eyes at risk for angle closure for screening purposes nor demonstrate reliable agreement in quantifying angle structures and relationships.


Another current limitation of angle-imaging devices is that they provide only a few cross-sectional images of the angle—1 slice with each UBM image, or simultaneous 180-degree sections of opposite sides of the angle with AS-OCT. Thus neither instrument at present provides the 360-degree view of the angle or the dynamic information about iris-trabecular contact revealed by indentation (“compression”) gonioscopy at the slit lamp. Slit-lamp gonioscopy currently remains the technical gold standard for evaluation of the angle in both clinical and academic research settings.


Optic Nerve and Retinal Nerve Fiber Layer Imaging


In the past several years there has been a proliferation of optic nerve and retinal nerve fiber layer imaging devices: confocal scanning laser tomography (Heidelberg Retinal Tomograph [HRT]; Heidelberg Engineering GmbH, Heidelberg, Germany); 2 types of instruments for OCT (an initial incarnation using time-domain instrumentation and more recent versions using a higher-resolution spectral-domain algorithm [SD-OCT], also known as Fourier-domain); and a device (GDx; Carl Zeiss Meditec Inc, Dublin, California) that measures only the retinal nerve fiber layer (RNFL) using laser polarimetry. The implicit expectation for these instruments’ value has been that high-resolution imaging of the optic disc and its surrounding RNFL will objectively identify glaucomatous damage and progression earlier and more accurately than the clinician observer.


The optic nerve imaging literature, describing as it does a new and rapidly changing technology, is—not surprisingly—replete with unresolved issues: the detection of structural changes with inconsistent functional correlations; difficulty in extracting meaningful data from anomalously shaped or sized discs, as seen in highly myopic individuals; and lack of agreement among different machines using the same SD-OCT technology, as for example when measuring the RNFL thickness. In the past 2 decades the utility of optic nerve imaging in screening programs to detect POAG has fallen far short of expectations.


Current devices show their greatest promise in eyes with significant risk factors or with known POAG, whose structural alterations may herald visual field (VF) progression. The positive predictive value of any test for glaucoma will of course be higher when the test is applied to persons at higher risk of disease; thus, appropriate clinical acumen in the application of testing will naturally increase its usefulness. Sophisticated arguments have been made that combinations of tests may improve the performance of any single test in isolation, with the identification of clinically relevant changes after integrating diverse measures (such as computerized perimetry data with HRT or OCT imaging)—but often these efforts show only equivocal sensitivities and specificities. The few population-based attempts to improve on the detection of glaucoma using this integrative strategy have not been very successful. Invaluable independently validated, longitudinal databases—comparing a single patient’s data to large tested populations of normal or glaucomatous eyes—are not yet available.
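The dependence of predictive value on pretest risk follows directly from Bayes’ rule; here is a minimal sketch, with illustrative (assumed) sensitivity, specificity, and prevalence figures that are not drawn from the text:

```python
# How the positive predictive value (PPV) of the same test changes with the
# prevalence of disease in the tested population (Bayes' rule).
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Identical test characteristics, applied to two hypothetical populations:
screening = ppv(0.90, 0.90, 0.02)  # population screening, 2% prevalence
referral = ppv(0.90, 0.90, 0.20)   # high-risk referral clinic, 20% prevalence
print(f"PPV: {screening:.2f} in screening vs {referral:.2f} in referral")
```

With these assumed figures, roughly 5 of every 6 positive screening results would be false alarms, whereas in the high-risk group most positives would reflect true disease; this is why clinical acumen in selecting whom to test so strongly shapes a test’s practical value.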


For many glaucoma specialists these instruments are of adjunctive rather than essential value in following patients. In our own practices, we commonly obtain optic nerve imaging for the measurement of the disc size (HRT or OCT) and/or baseline evaluation of the nerve fiber layer (OCT or GDx) at the time of the initial optic nerve evaluation, especially in the presence of an anomalously sized or shaped disc. In the absence of suggestive VF change, only rarely do we order follow-up examinations with any of these devices. Instead we perform careful 78-diopter or 90-diopter stereo evaluation of the disc at the slit lamp, with documentation of the nerve’s appearance, at each visit. We rely on IOP control, clinical disc assessments, and computerized visual fields as our primary measures of clinical stability.


The equivocal value of optic nerve imaging for following glaucoma patients stands in stark contrast to the incontrovertible contribution of OCT to managing retinal disease: imaging presents a clear and rapidly available representation of microscopic structural disarray, such as cystoid macular edema, that manifests over a relatively brief time frame and whose structural response to therapy can be functionally correlated. In glaucoma, by contrast, the subtle, long-term structural optic nerve head/RNFL alterations detected by the newest generations of imaging devices are of uncertain functional and clinical significance. Perhaps with future refinements and research this situation will improve—although the rapid pace of computerized innovation may preclude tracking a chronic, slowly progressive disease with a consistently available and stable technology.


Digital stereo-disc photography remains an invaluable technology across a wide range of resource capacities, yet its full potential is often neglected. Compared with refinements in nerve imaging devices, digital capture of optic nerve images has attracted comparatively little investment, with several excellent digital cameras no longer commercially available.


Despite reports of variable agreement among expert and generalist clinicians in distinguishing normal from glaucomatous discs by photographs, stereo disc photography has been the gold standard against which imaging devices are compared. In 4 major clinical trials investigating ocular hypertension and glaucoma—the Ocular Hypertension Treatment Study, European Glaucoma Prevention Study, Collaborative Normal-Tension Glaucoma Study, and Early Manifest Glaucoma Trial—photographic evidence of disc change was used as the primary endpoint, in conjunction with perimetric loss.


Precise image alignment technology now allows for digital photographs of the optic disc, taken at separate times, to be rapidly alternated in a “flicker” presentation, so as to highlight subtle shifts of vessel position, transient small disc hemorrhages, or rim changes with progressive cupping. With the ever-decreasing cost of high-resolution digital cameras and widespread use of computerized image displays, there appears to be an opportunity for government-supported or niche-industry initiatives to refine and market simple, high-quality disc camera and flicker-based image-analysis programs as an affordable alternative to expensive and rapidly changing imaging instruments.


Visual Fields


The evolution from manual, kinetic Goldmann and Tübingen perimetry a generation ago into computerized standard automated perimetry (SAP) has made an essential contribution to glaucoma diagnosis and progression monitoring. SAP’s innovations (eg, the pattern deviation map, mean deviation measure, and glaucoma hemifield test on the Humphrey-Zeiss Field Analyzer [Carl Zeiss Meditec, Inc]) have been used in nearly all major clinical trials. Changes in SAP specifically served as the major endpoint for both the Advanced Glaucoma Intervention Study and the Collaborative Initial Glaucoma Treatment Study.


Although the value of SAP as a fundamental tool to diagnose and monitor glaucoma cannot be overstated, its limitations are equally well recognized. These tests are difficult for many patients to perform consistently and reliably, especially the very young, the elderly, and the infirm.


Technological innovation in perimetry has concentrated on 3 major areas: 1) simpler instruments for screening, such as the frequency-doubling technology device (eg, FDT or Humphrey Matrix [Carl Zeiss Meditec, Inc]); 2) instrumentation (eg, short-wavelength automated perimetry [SWAP]) to detect earlier signs of glaucomatous damage than found with SAP; and 3) interpretation software to analyze data from multiple visual fields, so as to distinguish variability from progression (eg, the Glaucoma Progression Analysis or Visual Field Index on the Humphrey devices, or the stand-alone PROGRESSOR software [Medisoft, Leeds, United Kingdom]).


Much of perimetric development has concentrated on detecting early glaucomatous changes, either by means of rapid screening devices such as the FDT, or by more demanding tests installed on high-end automated perimeters that screen for blue-yellow perceptual defects as signs of disease appearing before changes with SAP. This led to the development of SWAP testing for both the Humphrey and Octopus perimetry platforms. Despite initial reports heralding SWAP’s greater sensitivity to early change, recent studies suggest there may be no difference between SWAP and SAP in identifying early conversion to perimetrically proven glaucoma. At the other end of the glaucoma spectrum from early disease is end-stage VF loss; computerized perimetry is less flexible than kinetic Goldmann testing for monitoring such advanced perimetric damage. However, if a small central visual field remains, testing with a smaller 10-degree test pattern, using either the standard size-III or the larger size-V stimulus, can be useful for monitoring.


Interpretation of a patient’s visual fields over time remains burdensome for many clinicians, even when assisted by software innovations. One common experience in SAP testing is the phenomenon whereby abrupt glaucoma-like worsening on a visual field appears in the absence of clinical changes in IOP control or disc appearance. To place inconsistent test results in context and to establish a reliable baseline, it has been recommended that all patients with manifest glaucomatous VF loss ideally undergo 6 SAP tests in the first 2 years after diagnosis, the better to identify those at risk for rapid progression. Similarly, other researchers have suggested that in the event of sudden apparent VF worsening, a series of 3 tests be performed for confirmation. These guidelines have been derived from expert studies of the characteristics of SAP testing itself, but they place unreasonable demands on testing time for patient and medical staff, and are unlikely, in the United States, to be reimbursed by insurance. Nevertheless, these suggested standards can be adjusted for practicality: obtain at least 2 tests per year in any eye with manifest field loss to enhance early detection of rapid progression, and always confirm any abrupt field change with a second test. This illustrates the healthy compromise between rigorously established “evidential value” and the “clinical value” of testing glaucoma patients’ visual fields in clinical practice.

Jan 16, 2017 | Posted in OPHTHALMOLOGY