Clinical Measures of Visual Performance





There is a plethora of vision tests available, including both tests designed for general clinical use and those specific to low vision. Even within the ‘low vision’ category the tests are not universally applicable and no single test can fulfil all requirements, although some tests have more than one use. The strengths and weaknesses of the particular test need to be acknowledged and understood, and tests should be selected carefully after considering why visual performance is being measured.


Why do we test visual performance?


For what purposes might visual performance be measured in a patient with visual impairment?



  • 1.

    For the early detection, or to monitor progression, of an ocular disorder. Usually the earlier that diagnosis occurs, the more likely it is that any available treatment will be successful. This could mean that the disorder could be treated before it causes permanent impairment; that is, before it leads to ‘low vision’. For tests which monitor progression in particular, there is increasing interest in having them self-administered by the patient at home. Delivering such tests on a personal electronic device (tablet or smartphone) makes this more feasible, but brings a new set of challenges in the calibration of screen luminance, viewing distance and room illumination, and in digital exclusion.


  • 2.

    To compare with ‘normal’ performance, or with an accepted standard; for example, the Department for Transport test for drivers, where a car number plate of standard size must be read at a fixed distance. When defining the category of vision impairment (VI) (e.g. when using the International Classification of Diseases 11th Revision (ICD-11), or when completing a Certificate of Vision Impairment (CVI) form (see Chapter 1)), or when classifying for disabled sport (see Chapter 15), it is important to have simple, internationally recognised visual acuity (VA) and visual field tests.


  • 3.

    To measure improvement and decline in performance with specific aids and devices; this might be between consecutive visits, with and without spectacle lenses, or with two different types of magnifier.


  • 4.

    To quantify the patient’s own subjective impression of their visual performance in everyday circumstances. A very simple example might be a test of VA in the presence of a bright light, to try to quantify just how significant the patient’s complaints of poor vision on a sunny day really are.


  • 5.

    To predict the outcome of a medical or surgical treatment, or a rehabilitation programme. There are always some risks with surgery, no matter how commonplace, and rehabilitation can have enormous costs (financial, and in the time involved). It is therefore sensible for the patient and practitioner to have as much information as possible in order to decide whether the potential benefits outweigh the risks, and to allow expectations to be managed.


  • 6.

    Predicting visual function for everyday tasks. This is the most important consideration for low-vision rehabilitation. Although patients are usually well aware of the activity limitations caused by their impairment, it is often necessary to quantify this. Unfortunately, there may be no direct correlation between functional performance and standard clinical tests: a patient may achieve a high score on an acuity test and yet perform poorly on a practical task like reading, or a patient may be able to navigate alone in a busy street despite a very constricted visual field. This was discussed in a feature issue of Ophthalmic and Physiological Optics ( ). It is important for the patient that the low-vision practitioner should be able to interpret their performance in terms such as ‘can cross a road safely’ rather than ‘can identify the 1.0 logMAR line on a chart’. It is difficult to devise simple clinical tests which relate directly to task performance, not least because there are many different visual and nonvisual skills interacting. Success in crossing the road, for example, may combine the extent of visual field, movement and contrast perception, hearing ability, experience, confidence and the amount of training received. As this problem has proved so difficult, there has been a shift to asking patients to describe their own functional difficulties in a structured and quantifiable way. These functional measures of performance (Patient Reported Outcome Measures or PROMs) are discussed in Chapter 4 .



Assessing the performance of vision tests


In most of the situations described, if the test is to be useful, it must produce results which are repeatable, reproducible and sensitive. To put this in a clinical context, if the VA of a patient was measured twice in quick succession, the result should be identical on both occasions (the test is repeatable), even if the test was carried out by two different practitioners in two different settings (the test is reproducible). If the patient received some treatment between these two measurement sessions which did improve their ocular condition, it would be expected that this would be reflected in a noticeable difference between the two measurements (the test is sensitive).


If repeated use of the same test is required, especially over a short time period, then the test must also be available in multiple versions which have been shown to be equivalent to each other, and this is not a trivial problem. Despite the care taken by the developers, it may only be when extensive data are collected by later researchers that differences between versions are revealed.


Vision tests designed to be displayed on computer screens have become very popular in recent years. They have the advantage that the controlling software (and therefore the entire test design) can be changed without needing new hardware. This means that ‘improvements’ can be made to the test almost continuously, but also that the version currently in use may differ from that which was included in a validation study. The use of electronic displays seems to solve the technical difficulties of having a large number of versions of a particular test, but this masks the fact that these versions are unlikely to have been rigorously tested. For some tests, insufficient attention is given to the effect of screen resolution (pixel density) and luminance variations; in particular, electronic screens may not be able to display the low contrast levels required to measure contrast sensitivity (CS) accurately ( ).


It would be unusual for a completely novel test of visual performance to be introduced, but if a new format for an established measure is developed (e.g. a new VA chart) it is necessary to validate such a test by comparison with a ‘gold standard’ measurement technique that has already been shown to have the required repeatability and sensitivity. To evaluate this agreement, the mean difference (or bias) and the limits of agreement (LoA) are required. It should be decided in advance of the study what results would be clinically acceptable: for example, a difference in VA of ‘two letters’ between two letter charts would be clinically insignificant, whereas a difference of ‘two lines’ may be unacceptable. Deciding these limits can also allow the researcher to determine the required number of participants that will be needed in the study (power calculation). Bland and Altman describe how the analysis to compare the tests should be carried out, once the two measurements have been obtained appropriately (e.g. in randomised order; matched viewing conditions).


Fig. 3.1 shows an example ‘Bland–Altman plot’ from such a study, with each data point representing the acuity of one eye from one participant.




Fig. 3.1


The Bland–Altman plot shows (on the y -axis) the difference between the numbers of letters read on the new acuity chart (New Test) and the traditional acuity chart (Gold Standard), and on the x -axis the average of the two results. The dashed line shows the mean difference (bias) and the dotted lines show the limits of agreement (mean ± 2 SD). SD , Standard deviation.


This particular plot shows a systematic difference between the two tests (the mean difference (bias) is not zero): the subjects on average read about three more letters on the New Test than on the Gold Standard. Considering the individual differences, the standard deviation (SD) was 6.5 letters. Therefore, we would expect 95% of the differences to lie within 1.96 (or approximately 2) SD of the mean. So nearly all differences between pairs of measurements will be within the boundary of mean + 2 SD (3 + 13 = 16) to mean − 2 SD (3 − 13 = −10), and these are called the 95% LoA. The LoA are often much more indicative of the limitations of a technique than the bias, which can be close to zero even for a test giving very variable results. In this example, we can see that the New Test can give acuity results which differ by several lines (both better and worse) compared to the Gold Standard, and the two tests would certainly not be interchangeable.
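The bias and limits of agreement described above can be computed directly from the paired scores. The following is a minimal sketch (the function name and the paired data are illustrative, not from the study shown in Fig. 3.1):

```python
import statistics

def bland_altman(new_test, gold_standard):
    """Return (bias, loa_lower, loa_upper) for paired measurements.

    bias = mean of the differences (new - gold);
    LoA  = bias +/- 1.96 x SD of the differences (the 95% limits).
    """
    diffs = [n - g for n, g in zip(new_test, gold_standard)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired scores (letters read by the same eyes on each chart)
new = [52, 60, 45, 70, 38, 55, 63, 48]
gold = [50, 55, 47, 64, 36, 50, 60, 47]
bias, loa_lower, loa_upper = bland_altman(new, gold)
```

Each point on the Bland–Altman plot is then (mean of the pair, difference of the pair), with horizontal lines drawn at the bias and the two LoA.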


Note that the simpler method of performing a linear regression between the two tests is not appropriate: although this will show whether the two tests are related, it will not show any differences in scoring between the two tests.


Distance visual acuity


The most familiar test of spatial vision is the resolution task presented by a distance acuity chart (traditionally a Snellen design), which determines the ability to discriminate the smallest possible symbols (optotypes) at the highest contrast. Contrast is (Lmax − Lmin)/(Lmax + Lmin), where Lmax and Lmin are respectively the maximum and minimum luminances in the target: for black optotypes on a white background (or vice versa) this is recommended to be at least 75%; the letters should have luminance no greater than 15% of the background level ( ). The standard optotype is the Landolt ring (or ‘C’) with a choice of at least four gap positions, but it is more common to use letters as optotypes. In discriminating a letter the viewer is detecting the gap between adjacent limbs that make up the letter. Performance is defined in terms of the angular subtense of the ‘gap’: 1.75 mm at 6 m subtends 1 minute arc (min arc), which is taken to be ‘normal’ performance. The symbols, or optotypes, have standardised shapes, being drawn within a 5-unit × 5-unit square with limb widths equal to 1 unit, and the overall letter height equal to 5 units ( Fig. 3.2 ).
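The contrast definition above is straightforward to evaluate; this small sketch (function name is illustrative) shows that a letter at 15% of the background luminance gives a contrast of about 0.74, approximately the 75% recommendation:

```python
def michelson_contrast(l_max, l_min):
    """Contrast as defined in the text: (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

# Letter at 15% of a 100 cd/m^2 background: (100 - 15) / (100 + 15) ~= 0.74
c = michelson_contrast(100.0, 15.0)
```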




Fig. 3.2


A schematic illustration of normal visual acuity: the optotype which can just be resolved has a distance between adjacent limbs of 1.75 mm, which subtends an angle of 1 min arc at the nodal point of the eye (N) when viewed from 6 m.


There are several ways in which the acuity measured with optotypes can be expressed, and these are illustrated in Table 3.1 . The Snellen fraction (which for ‘normal’ vision is often taken as 6/6) has the viewing distance as the numerator, and the denominator is the distance from which the stroke width of the letter would subtend an angle of 1 min arc at the eye (the angular subtense of the complete letter would be 5 min arc): the lines of letters on commercially produced charts are labelled with this latter distance. Snellen (6 m) defines the standard viewing distance as 6 m, and Snellen (20 ft) simply converts this into feet. The 6/6 acuity standard can therefore be described as requiring a minimum angle of resolution (MAR) of 1 min arc. Decimal notation expresses the Snellen fraction as a decimal; equivalently, it is the reciprocal of the MAR. LogMAR is the logarithm to base 10 of the MAR in min arc. In the logMAR system each step (line) is 0.1 log units, equal to an approximate multiplication factor of 1.25; this represents the size difference between the limb widths making up the corresponding letters. Each additional 0.3 logMAR represents a doubling in optotype size: on a conventional chart this means letters on each line are twice the size of letters on the row three lines below, and half the size of optotypes on the row three lines above.
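The relationships between the notations can be expressed as simple conversions (a sketch; function names are illustrative). These reproduce the rows of Table 3.1:

```python
import math

def snellen_to_logmar(numerator, denominator):
    """e.g. 6/60 -> MAR = 60/6 = 10 min arc -> logMAR = log10(10) = 1.0."""
    mar = denominator / numerator
    return math.log10(mar)

def logmar_to_decimal(logmar):
    """Decimal notation is the reciprocal of the MAR."""
    return 1.0 / (10 ** logmar)
```

For example, `snellen_to_logmar(6, 12)` gives 0.3, and `logmar_to_decimal(1.0)` gives 0.1, matching the 6/60 row of the table.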



Table 3.1

The Interrelationship Between the Different Acuity Notations

MAR (min arc) logMAR Snellen (6 m) Snellen (20 ft) Decimal Notation
100 2.0 6/600 20/2000 0.01
79 1.9 6/480 20/1600 0.0125
63 1.8 6/380 20/1250 0.016
50 1.7 6/300 20/1000 0.02
40 1.6 6/240 20/800 0.025
32 1.5 6/190 20/630 0.032
25 1.4 6/150 20/500 0.04
20 1.3 6/120 20/400 0.05
15.8 1.2 6/95 20/320 0.063
12.5 1.1 6/75 20/250 0.08
10.0 1.0 6/60 20/200 0.1
8.0 0.9 6/48 20/160 0.125
6.3 0.8 6/38 20/125 0.16
5.0 0.7 6/30 20/100 0.2
4.0 0.6 6/24 20/80 0.25
3.2 0.5 6/19 20/63 0.32
2.5 0.4 6/15 20/50 0.4
2.0 0.3 6/12 20/40 0.5
1.58 0.2 6/9.5 20/32 0.63
1.25 0.1 6/7.5 20/25 0.8
1.0 0.0 6/6 20/20 1.0
0.8 −0.1 6/4.8 20/16 1.25
0.63 −0.2 6/3.8 20/12.5 1.6
0.5 −0.3 6/3 20/10 2.0

MAR , Minimum angle of resolution.


Acuity notations can be converted between these different systems, but this is based solely on letter size and the angular subtense of the limb widths. It does not take into account any variation between charts in the shape of letters, recognition difficulty of letters chosen, letter spacing and line separation, and number of letters on each row. All of these factors could affect an individual patient’s measured acuity. In general, it is not appropriate to express acuity measured in Snellen as logMAR values, although Snellen equivalents can be given for measurements made on a logMAR chart ( ).


The standard for ‘normal’ acuity (6/6) is the ability to correctly resolve and identify letters whose limb width subtends an angle of 1 min arc at the eye. However, a high proportion of young subjects can achieve a performance which is considerably better than 6/6 ( ). High-contrast VA is widely used by optometrists because it is an excellent way to take ‘baseline’ measurements: to compare performance, for example, with and without the use of spectacle lenses, as it is extremely sensitive to blur. It can also be used to confirm that a magnifying device is producing the expected improvement in performance: if, for example, the patient uses an aid labelled as ‘2×’ magnification, the retinal image will be twice the size. This means that the patient will be able to recognise letters from the test chart in which the detail has one-half the angular subtense. Acuity should therefore improve by a factor of 2: for example, an acuity of 6/36 would be expected to improve to 6/18 with a 2× telescope. A factor of 2 is a change of 0.3 log units, so 2× magnification would increase logMAR VA from, for example, 0.7 to 0.4.
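The predicted effect of magnification on logMAR acuity described above follows directly from the logarithmic scale: an M× device divides the MAR by M, so logMAR falls by log10(M). A minimal sketch (function name is illustrative):

```python
import math

def predicted_logmar(unaided_logmar, magnification):
    """Predicted acuity with an M-times magnifier: logMAR falls by log10(M).

    For a 2x device, log10(2) ~= 0.3, so e.g. 0.7 improves to about 0.4.
    """
    return unaided_logmar - math.log10(magnification)
```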


In determining a baseline acuity level for a low-vision patient, however, standard Snellen charts are not the most suitable test, as they are designed to measure normal or near-normal acuity. This means that the ratio of letter size between adjacent rows is much smaller as the higher acuity levels are approached; there is, for example, a 1.2× increase in size from 6/5 to 6/6, compared to the 1.67× change from 6/36 to 6/60. The number of letters per line also varies from one to eight in moving down the chart. It is well-known that spatial resolution for letters is influenced by the presence of adjacent contours which are closer than a letter-width distance away ( ). This ‘contour interaction’ effect is not well controlled in the Snellen chart because of the different letter spacings on each line, and an optimal design would demand that the spacing between letters and rows be proportional to the letter size throughout. The presence of surrounding contours around the target letter may be particularly confusing in the case of a patient with central scotoma: with an isolated letter the patient can search and locate it more accurately.


Early attempts to produce an acuity chart for people with low vision concentrated on providing very large targets—the Feinbloom chart uses numbers up to a ‘210 m’ size (i.e. the detail in the letter subtends 1 min arc at 210 m, and it could be seen by someone with 6/6 vision at that distance). The Sloan chart uses a constant size progression from row to row (1.25×, or 0.1 log unit), but the number of letters and their spacing varies at each level ( ).


The Bailey-Lovie chart was the earliest attempt to fulfil all the requirements to measure ‘low vision’ in a commercially available design. It can easily be used at different working distances, has equal numbers of letters per line (5), equivalent line and letter spacing throughout, the 10 letters used have approximately equal legibility and there is a standard ratio of size between adjacent rows (1.25×, or 0.1 log units) ( ). The scoring of VA on a Snellen chart is often on a row-by-row basis, with the patient given credit for a row if they read the majority of letters on it (although results such as ‘6/18 + 2’ are sometimes recorded). This grading is relatively coarse and insensitive to change (the patient may read an extra half-line but achieve the same score), and letter-by-letter scoring is preferred. On a logMAR chart, the 0.1 ‘credit’ for reading a full row of (usually) five letters can be subdivided into 5 × 0.02 steps for reading each individual letter. Remembering that logMAR scores decrease for an improvement in performance, reading the line labelled 0.7 plus 3 out of the 5 (smaller) letters on the 0.6 line would lead to a final score of 0.64.
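Letter-by-letter scoring can be reduced to simple arithmetic, as in this sketch (function name is illustrative), which reproduces the 0.64 example above:

```python
def letter_by_letter_score(last_full_line_logmar, extra_letters):
    """logMAR with letter-by-letter credit on a 5-letter-per-line chart.

    Each correctly read letter on the next (smaller) line earns 0.02
    (one-fifth of the 0.1 line credit); logMAR decreases as VA improves.
    """
    return round(last_full_line_logmar - 0.02 * extra_letters, 2)

# Full 0.7 line plus 3 letters of the 0.6 line -> 0.64
score = letter_by_letter_score(0.7, 3)
```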


In low-vision work, the Snellen chart is frequently presented at different viewing distances, leading to recorded acuities such as ‘2/36’ or ‘1/60’. The numerator expresses the actual viewing distance used, with the denominator giving the distance from which a ‘normal’ observer would be able to recognise the letter: in fact it would be labelled with this latter value on the chart. In the logMAR system, the viewing distance does not form part of the notation, and must be accounted for separately: the chart must also be labelled with the viewing distance for which it is calibrated. If the viewing distance from the chart decreases by 0.1 log units, then all the letters on the chart are effectively magnified by that same factor. If the patient could read the line labelled 0.7 at 6 m, their acuity now (at a 4.8 m viewing distance) should be 0.6. If the viewing distance decreased by 0.2 log units (to 3.8 m) the acuity should improve to 0.5 logMAR. Although the patient’s acuity is now apparently 0.5, this has been enhanced by the closer viewing distance, and must be compensated to give a correct acuity assessment. Thus, if you decrease the viewing distance by 0.2 log units, the logMAR acuity recorded should increase by 0.2 log units: as in the example given, a logMAR acuity of 0.5 recorded at 3.8 m is ‘really’ 0.7. The sequence of viewing distances that represent progressive 0.1 log unit steps does not need to be committed to memory, as it is given in the Snellen notation labelling on the chart. The distances (in metres) from which the detail in the letters subtends 1 min arc are labelled for successive rows as 60, 48, 38, 30, 24, 19, 15, 12, 9.5, 7.5, 6, or, dividing by 10: 6, 4.8, 3.8, 3, 2.4, 1.9, 1.5, 1.2, 0.95, 0.75 and 0.6. To take an example, suppose a patient has an acuity recorded as 0.5 logMAR at a viewing distance of 3.0 m. 
This represents a 0.3 log unit change in viewing distance from 6 m (3 steps of 0.1 log unit each on the scale given), so the acuity must be compensated by adding 0.3 to the recorded acuity: the patient’s VA is therefore actually 0.5 + 0.3 = 0.8 logMAR. A 0.3 log unit decrease of the viewing distance (or to put it more simply, a halving of the viewing distance) and a subsequent increase in the recorded VA by 0.3 is in practice usually sufficient to deal with the majority of acuities encountered. If not, a further halving of the viewing distance (to one-quarter its original value) and an increase in the measured logMAR acuity by 0.6 (0.3 + 0.3) can be used.
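The compensation described above generalises to any viewing distance: add log10(calibrated distance / actual distance) to the recorded logMAR. A minimal sketch (function name is illustrative; the chart is assumed calibrated for 6 m):

```python
import math

def true_logmar(recorded_logmar, actual_distance_m, calibrated_m=6.0):
    """Compensate recorded logMAR for a non-standard viewing distance.

    Halving the distance adds log10(2) ~= 0.3 to the recorded value.
    """
    return recorded_logmar + math.log10(calibrated_m / actual_distance_m)

# Example from the text: 0.5 logMAR recorded at 3 m -> 0.5 + 0.3 = 0.8
```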


The number of letters presented at each size on the Bailey-Lovie chart is illustrated in Fig. 3.3 , in comparison to a standard Snellen chart. It can be seen that a patient with an acuity of logMAR 0.6 (6/24) would have had the opportunity to read 25 letters on the Bailey-Lovie chart, compared with only 6 on a standard Snellen chart. As well as giving a more accurate assessment of acuity, this must increase the confidence of the patient, and allow better comparisons of the clarity of letters during subjective refraction. The letters used are those chosen by the British Standards Institution for the 1968 Standard for Test Charts, and there is no ‘O’ or ‘C’: this can lead to difficulties in finding a target for the subjective confirmation of astigmatic correction. The chart is also larger than, and a different shape from, the more traditional Snellen chart, and therefore requires a specific internally illuminated cabinet.




Fig. 3.3


The cumulative number of letters which will have been presented to, and read correctly by, a patient achieving a given acuity level on a standard Snellen or Bailey-Lovie/ETDRS (Early Treatment Diabetic Retinopathy Study) letter chart. The fixed geometric decrease (0.1 log unit, 0.8×) in letter size on adjacent rows reading down the chart for the Bailey-Lovie/ETDRS chart is also shown. MAR , Minimum angle of resolution.


More commonly used at the present time is the ETDRS chart ( Fig. 3.4 ).




Fig. 3.4


(A) High-contrast ETDRS chart and (B) Low-contrast ETDRS chart, in illuminated cabinet. (The appearance of a low-contrast chart has been created photographically for illustration purposes.)


This was developed due to a need to have several equivalent versions of the same chart design which could be used when taking multiple measurements during a clinical trial: it is named after the ‘Early Treatment Diabetic Retinopathy Study’, for which it was developed ( ). These charts use combinations of the 10 Sloan letters (S, O, C, D, K, V, R, H, N, Z) which have similar, although not identical, recognition difficulty. The letter combinations for each line on the chart are therefore chosen so that the summed difficulty for the five letters combined, differs by less than 1% between lines. The standard viewing distance of the chart is 4 m, and this can be halved to 2 m or 1 m as required (for visually impaired observers). If refraction is carried out with letter charts at these distances, then the subjective prescription will be over-plussed by an amount equal to the dioptric distance of the chart. At 2 m, the prescription would be +0.50 D (reciprocal of 2 m) relative to a correction for infinity (as determined by retinoscopy) (e.g. −2.00 DS rather than −2.50 DS). Similarly, to obtain the optimum acuity using a chart at a closer distance may require the ‘distance’ refractive correction to have a compensating plus lens added (e.g. add +1.00 to the distance prescription to measure the VA at 1 m).
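The over-plus introduced by refracting at a finite chart distance is simply the reciprocal of that distance in metres, as this sketch shows (function name is illustrative):

```python
def overplus_dioptres(chart_distance_m):
    """Dioptric over-plus when refracting with a chart at a finite distance.

    The subjective result is more plus than an infinity correction by
    the reciprocal of the chart distance: +0.50 D at 2 m, +1.00 D at 1 m.
    """
    return 1.0 / chart_distance_m
```

Conversely, the same value is the plus lens to add to a known distance prescription to measure VA at that chart distance.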


Although many charts are now available based on these principles (including those designed for presentation on electronic screens), care must be taken in describing them as Bailey-Lovie or ETDRS charts. In most cases it is true that they are logMAR charts, but they may not share other important design features (number and spacing of letters; letter shapes; range of letters used).


The Bailey-Lovie and ETDRS charts were very specifically designed to standardise the effect of surrounding contours (crowding) on visual perception. It is therefore important that extra contours are not introduced by the practitioner pointing to letters to orientate the patient. Instead, requiring the patient to find ‘the first letter’ or ‘the third line’ gives the practitioner the opportunity to find out about the patient’s localisation ability: poor performance can suggest the presence of scotomas. A scotoma which stops the patient from reading a large part of the chart also creates problems for scoring logMAR acuity, because a score which counts the missed letters does not accurately represent the resolution ability of the patient. For clinical purposes, it would be more accurate to record VA descriptively, such as ‘only left half of chart seen, first three letters of logMAR 0.3 correct’. Some practitioners, especially within clinical trials, use a letter-counting method to quantify VA by the total number of letters read, but this is very difficult to interpret without detailed knowledge of the measurement protocol ( ).


In the same way that the Bland-Altman method can be used to assess agreement of two different tests, it can also be used to assess the agreement of a test with itself—that is, its repeatability. In this case, the mean difference (bias) can again be measured (e.g. the patient may perform better on the second attempt due to familiarity), and the variability is expressed as the ‘coefficient of repeatability’. For patients with normal acuity it can be 0.07 logMAR (3.5 letters better or worse on second attempt) ( ), whereas for those with macular degeneration it may be as much as two ( ) or three ( ) lines.
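The coefficient of repeatability can be computed from test-retest data in the same way as the LoA, as this sketch shows (function name and data are illustrative):

```python
import statistics

def coefficient_of_repeatability(first, second):
    """CoR = 1.96 x SD of the within-subject test-retest differences.

    95% of repeat measurements are expected to fall within +/- CoR of
    each other (after allowing for any systematic bias).
    """
    diffs = [b - a for a, b in zip(first, second)]
    return 1.96 * statistics.stdev(diffs)
```

A CoR of 0.07 logMAR, for example, means a second attempt within about 3.5 letters of the first would be expected for most normally sighted patients.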


Assessing ultra-low vision


With a letter chart that can be moved close to the patient, the poorest level of VA which can be quantified with a Snellen chart designed for 6 m would be 0.5/60 (6/720 or logMAR 2.08), by bringing the chart to 50 cm. For a 4 m ETDRS chart at 50 cm, the poorest VA recordable would be about logMAR 1.9. Note that in both cases, any potential defocus from the close viewing distance would be very unlikely to impact VA of this level. If the chart is not resolved at this distance, the traditional optometric approach to describe vision is to use ‘hand movements’, ‘perception of light with/without projection’ (projection being the ability to determine the direction from which a light is shining), and then ‘no perception of light’. Such ‘tests’ of performance are nonstandardised, and can also be psychologically demoralising for the patient, suggesting their vision is so poor that methods to measure it do not exist. The Berkeley Rudimentary Vision Test is a simple clinical test which has been proposed to quantify these very low levels of visual performance, once the limits of a letter chart have been reached ( ). The test is presented manually by the clinician on double-sided cards, where the visual tasks get progressively easier. Single letters are used first (a letter ‘E’ of four different sizes, whose orientation can be changed to alter the task), presented at 1 m and, if not seen, then at 25 cm. If this test is not possible, the target is changed to a square-wave grating of four different stripe widths, whose orientation must be identified at 25 cm. The next grade of vision is tested with ‘white field projection’ using a card which is half black and half white, or all black with a white quadrant. Finally, ‘black-white discrimination’ is tested using a card which is white on one side and black on the other. For each of the tests, rotating the cards allows many different target presentations to be made by the clinician.


If the acuity drops below the level measurable with a letter chart, it is very unlikely that vision enhancement will be possible. However, recently a new area of rehabilitation of this ‘ultra-low vision’ has begun to develop involving individuals who have undergone various forms of visual restoration. This restoration may involve, for example, gene therapy, or an ocular or cortical ‘bionic implant’, often for an individual with a hereditary retinal degeneration. In these cases, it may be important to carefully document visual performance before and after the procedure. The way in which those with bionic implants learn, or are taught, how to access and use the novel visual information available, is a field which is still in its infancy.


Contrast sensitivity


Despite the usefulness of VA as a performance measure, it is not a complete description of visual performance as it does not deal with the ability to detect large objects and low contrasts. CS does test the ability to detect such objects. As these are important components of the ‘real’ visual world, it is claimed to provide a better assessment of the patient’s true functional vision. The contrast sensitivity function (CSF) represents the reciprocal of the contrast detection threshold for sine-wave gratings (alternate light and dark bars) of variable spatial frequency (cycles/degree) and contrast. Sine waves are used because they are the simplest spatial stimulus, and more complex luminance distributions can be Fourier-analysed into a series of sine-waves: the response of the visual system to a complex pattern can be predicted from its response to the component sine-wave stimuli.


CS for gratings is a much more fundamental, lower-level visual task, involving simple resolution of the presence of the grating, compared to the higher-level recognition/identification task which is required when letters have to be named in a traditional VA test. Nonetheless, the angular subtense of the two tasks can be equated. If the patient is able to detect a grating then they can distinguish that the black bars are separate, so the gap between them (i.e. the white bar) will subtend 1 min arc at the eye (by analogy with the threshold for logMAR 0.0 [6/6] letter acuity). Thus, 1 cycle of the grating would subtend 2 min arc or 1/30 degree, and 30 cycles would subtend 1°. Thus, a patient with 6/6 vision for high contrast Snellen letters should be able to detect a 30 cycle/degree grating at maximum contrast: this highest spatial frequency which can be detected represents the cut-off or limit to detection ability where CS becomes minimal (and equal to 1, the reciprocal of the maximum grating contrast which also equals 1). In the same way, logMAR 0.3 (6/12) would be equivalent to 15 cycles/degree, for example, and this letter target can be schematically represented on the same axes as the CSF. Changes in target size are indicated by shifts along the x -axis (spatial frequency), whilst different contrast levels are represented on the y -axis. The peak sensitivity usually occurs around 3–5 cycles/degree with a lower sensitivity for both higher and lower spatial frequencies. This gives a characteristic inverted-U-shaped curve, shown in Fig. 3.5 .
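The equivalence described above (MAR of 1 min arc corresponding to 30 cycles/degree) gives a simple conversion from letter acuity to the equivalent grating cut-off frequency. A sketch (function name is illustrative):

```python
def cutoff_cpd(logmar):
    """Equivalent grating spatial frequency (cycles/degree) for an acuity.

    One grating cycle = 2 x MAR, and there are 60 min arc per degree,
    so cutoff = 60 / (2 x MAR) = 30 / 10**logMAR.
    """
    return 30.0 / (10 ** logmar)

# logMAR 0.0 (6/6) -> 30 c/deg; logMAR 0.3 (6/12) -> ~15 c/deg
```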




Fig. 3.5


The solid line represents a ‘normal’ contrast sensitivity function. The dashed line shows the theoretical response of Patient 1 with a loss of sensitivity to low- and medium-spatial frequencies, but normal sensitivity to high-spatial frequency (and therefore near-normal visual acuity). The dotted curve indicates a loss of high-spatial frequency sensitivity (resulting in poor acuity) in Patient 2 but normal sensitivity for low-spatial frequency targets.


Having a low contrast threshold for detection of the grating (being able to see it even when presented with minimal contrast, which could be approximately 0.5% for a grating of the optimal spatial frequency) indicates a very high CS (1/0.005 = 200; log CS = 2.3). Clinically the CSF has been used to detect patients with normal/near-normal VA yet complaining of subjective visual difficulties: such a case is illustrated in Fig. 3.5 ( ). The two curves represent the response of two different patients, who obviously have different contrast sensitivities. Although Patient 1 has a reduced sensitivity to low spatial frequencies, the higher frequencies can be detected almost normally, and acuity is high. Nonetheless this subject may have significant visual problems due to the marked differences in the ability to detect large low-contrast objects. Similarly, it is possible to envisage patients with almost equal functional ability (as the detection of low-spatial-frequency, low-contrast targets is equivalent), yet very different acuity for high contrast optotypes (as the high-contrast threshold for high-spatial-frequency targets is different).
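The sensitivity arithmetic above is simply the reciprocal of the detection threshold, usually expressed as a log value. A minimal sketch (function name is illustrative):

```python
import math

def log_cs(threshold_contrast):
    """log10 contrast sensitivity from a contrast detection threshold.

    CS = 1 / threshold; e.g. a 0.5% threshold gives CS = 200, log CS ~= 2.3.
    """
    return math.log10(1.0 / threshold_contrast)
```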


In the presence of refractive blur created by uncorrected ametropia, the acuity loss is equivalent for both high- and low-contrast targets, since the blurred image has the same contrast as the target which created it. This is not equally true, however, of the diffusive blur produced by media opacity (such as cataract) ( ). Here there is scattering of light within the eye, and the effect of this is to increase the luminance of the retinal image in both the light and the dark areas, thus reducing the effective contrast. These patients often complain of ‘faded’ or ‘washed-out’ vision due to scattering of light within the eye reducing image contrast. These complaints are often out of proportion to the VA which is still relatively good: as this is a high-contrast target, the patient can still recognise the letters despite a reduction in contrast. The same contrast attenuation applied to a low-contrast target is likely to take it below threshold. Thus, the low-contrast performance is likely to be more representative of ‘everyday’ visual complaints than that of high-contrast, and CS is a very useful clinical test.


Despite its potential usefulness, measurement of the full CSF is fraught with difficulties in the clinical setting ( ). Traditional methods are time-consuming, and it can be difficult to decide what constitutes ‘abnormality’ (because it is difficult to combine the results for all spatial frequencies into a single score). Computer-based methods such as the ‘quick-CSF’ ( ) are impressive, but are not yet widely adopted in clinical practice. A further problem is that there is a marked effect of the field-of-view on the CSF ( ). The work of Hess ( ) suggested that CSF results in low vision must be interpreted with great care. It appears that if the patient has a field defect that obscures part of the grating display, this alone is enough to produce a characteristic loss of CS. With a simulated central scotoma in a normally sighted subject, a loss of sensitivity to high spatial frequencies is induced; peripheral field loss gives a low spatial frequency loss, and a mid-peripheral annular scotoma induces a loss at medium spatial frequencies, whilst high and low spatial frequencies are unaffected.


Clinical Tests


There have been a number of attempts to produce practical CS tests suitable for use in clinical practice. These clinical tests generally restrict the range of spatial frequencies over which testing is carried out, so a full CSF is not obtained. In patients with visual impairment such tests are very useful for what they can suggest about functional performance of everyday tasks. However, such tests are also used clinically as a screening device to detect the early stages of pathology before the acuity is impaired.


Available clinical tests fall into three categories:



  • 1.

    Sinusoidal gratings at limited number of frequencies


    This approach to testing the CSF is illustrated schematically in Fig. 3.6A which shows how the test targets sample the CSF curve, with the chart appearance shown in Fig. 3.6B . This was the design used in the Vistech Vision Contrast Test System (VCTS) chart ( ) and the related Functional Acuity Contrast Test (FACT).




    Fig. 3.6


    (A) A schematic representation of the way in which contrast sensitivity is measured by a test such as (B) the FACT (Functional Acuity Contrast Test) chart.

    (From Hitchcock, E.M., Dick, R.B., & Kreig, E.F. (2004). Visual contrast sensitivity testing: a comparison of two F.A.C.T. test types. Neurotoxicology and Teratology , 26, 271–277.)


    Each row of the chart represents one spatial frequency, with the columns representing decreasing contrast levels moving from left to right. The gratings are oriented vertically, or tilted to the right or left. The patient ‘reads’ along each row from left to right, progressing from high-contrast to low-contrast targets, reporting the orientation of the grating pattern in each disc. This ‘forced choice’ procedure is a robust psychophysical task, designed to minimise the effect of different patient criteria: some patients will not report the direction or presence of a grating unless they are absolutely sure, whereas others respond positively when much less certain. Such a test is much simpler, quicker (because only a limited number of spatial frequencies are tested), and cheaper than electronically generated sine-wave stimuli. However, the high chance of a correct guess (33%), combined with only one presentation of each target, leads to variability in results ( ): significant changes in performance were found on successive sessions that were due to chance rather than to real changes in the patient’s vision. A further disadvantage is the limited number of contrast levels which can be presented, leading to a risk of ‘ceiling’ or ‘floor’ effects: individuals with subtle CS defects can see all the targets, whereas those with moderate CS loss cannot see any targets ( ).


  • 2.

    Low-contrast VA


    In this case the patient is required to read letters of decreasing size at one or more fixed levels of contrast ( Fig. 3.7 ).




    Fig. 3.7


    A schematic representation of the way in which contrast sensitivity is assessed by a test comprising letters of variable size in a high-contrast and a low-contrast version. An example of such charts is shown in Fig. 3.4 .


    The smallest letter size which can be read at a given contrast level is representative of the cut-off spatial frequency at that level—the point where a horizontal line representing that contrast level intersects with the CS curve. The characteristic shape of the CSF (especially the slope of its right-hand edge) is apparent as a difference in acuity at two different contrast levels. One chart design used 100% and 10% contrast, whereas another recommended 96% and 7% (see Fig. 3.4 ). Even people with good vision show a predictable reduction in VA at the lower contrast level. The presence of a visual loss which changes the slope of the high-frequency portion of the CSF would be detected by a disproportionately poorer acuity for the low-contrast letters. Such responses have been described in patients with glaucoma and multiple sclerosis, even though their high-contrast acuity remains normal. This type of test has therefore been recommended as useful for screening for eye disease during routine eye examinations. The Colenbrander Mixed Contrast Card Set is a near vision chart with adjacent high- and low- (10%) contrast letters and sentences presented at progressively decreasing sizes. The patient is asked for the smallest print that can be read in black text, and the smallest print that can be read in grey text (which is expected to be bigger). A two- to three-line difference in VA is typical of normal subjects, but differences of up to 10 lines have been identified in some patients ( ).


  • 3.

    Test of peak CS



To measure peak CS, the patient views large, easily seen detail (to be near the spatial frequency corresponding to peak sensitivity) at variable contrast to establish the detection threshold ( Fig. 3.8 ). The aim is to determine a single value which represents the minimum contrast required in order for a target to be visible. The best known tests in this category are the Pelli-Robson (PR) Low Contrast Letter Chart ( ) and Mars Letter Contrast Sensitivity Test ( ). These are both tests which are printed (rather than being backlit in an illuminated cabinet) and therefore they rely on the correct amount and uniformity of illuminance from an external source in order to create the labelled contrast levels: the instructions in the user manuals supplied with these tests should be followed carefully to ensure accurate results. Rather than gratings, both use letters as their targets (and the Mars test is also available with numbers in order to be language neutral). Provided the letters are large, the peak spatial frequencies are well-represented in the image.




Fig. 3.8


(A) A schematic representation of the way in which contrast sensitivity is assessed by a test consisting of letters of fixed size and variable contrast. (B) The Mars Letter Contrast Chart is an example of a test of this design.


It has been claimed that letters are more appropriate than gratings, as they are more familiar to the patient, are relatively easy to produce in comparison to sine-wave gratings, and they allow simultaneous testing of detection at all orientations (whereas gratings test only one, usually vertical). There is, however, a major difference between the task of detection of (large field) gratings and recognition of isolated letters, so the two are not directly comparable. The general design of the PR and Mars tests is the same, with the Mars specifically designed to overcome some of the disadvantages of the PR. The Mars chart is small enough to be hand held and is designed to be viewed from 50 cm with the habitual reading correction. The PR is wall mounted, so exposure to light and dust changes the contrast over time. It is viewed from 1 m (although it was originally intended to be at 3 m), and some clinicians recommend using a +0.75 add for people with advanced presbyopia. Each test displays eight rows of letters. There is only one version of the PR test, but the Mars test has three versions, providing an easy means to test both eyes monocularly, and binocularly. The Mars test can also be stored in a protective case to keep it clean, and its smaller size makes it easier to achieve even illumination.


The tests are calibrated to measure log CS.


log CS = log [100 / contrast threshold (%)]   or   log CS = −log C


where contrast threshold is the contrast of the faintest letter which can be recognized. Contrast here is Weber contrast:


Contrast = (L_target − L_background) / L_background


To take an example: if the contrast threshold C = 0.021 (2.1%), then CS = 1/0.021 = 47.6 and log CS = 1.68.
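The two formulae and the worked example above can be checked with a short calculation (an illustrative sketch, not part of the source text):

```python
import math

def weber_contrast(l_target: float, l_background: float) -> float:
    """Weber contrast as defined above; for dark letters on a lighter
    background this is negative, and its magnitude is used."""
    return (l_target - l_background) / l_background

def log_cs(contrast_threshold: float) -> float:
    """Log contrast sensitivity: log CS = log(1/C) = -log C."""
    return -math.log10(contrast_threshold)

# Worked example from the text: threshold C = 0.021 (2.1%)
c = 0.021
print(round(1 / c, 1))       # CS ~ 47.6
print(round(log_cs(c), 2))   # log CS ~ 1.68
```

The same functions reproduce the earlier figure for a 0.5% threshold: 1/0.005 = 200, giving log CS = 2.3.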


Different ways to score the PR test have been recommended over the years. Each triplet of letters differs by 0.15 log units from the preceding triplet: to achieve the score for that triplet the patient has to read two out of the three letters correctly. However, some clinicians have scored it ‘by letter’ and counted 0.05 log units for each letter. For the Mars test each successive symbol decreases in contrast by 0.04 log units reading from the top left of the chart, so genuine scoring by letter is used.
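The scoring arithmetic above can be sketched as follows. This is a hedged illustration of the step sizes quoted in the text; the assumption that the first (100% contrast) letters sit at log CS 0.00 is mine, and real charts and manuals may apply different starting values or pass criteria:

```python
def pr_by_triplet(triplets_passed: int) -> float:
    """Pelli-Robson by-triplet score: each triplet in which at least 2 of
    the 3 letters are named correctly credits 0.15 log units, per the
    text. Assumes (hypothetically) the chart starts at log CS 0.00."""
    return triplets_passed * 0.15

def score_by_letter(letters_correct: int, step: float) -> float:
    """By-letter scoring: 0.05 log units per letter on the Pelli-Robson,
    0.04 per symbol on the Mars."""
    return letters_correct * step

# A patient passing 12 PR triplets scores the same log CS as one
# credited with 36 individual PR letters at 0.05 each:
print(round(pr_by_triplet(12), 2))          # 1.8
print(round(score_by_letter(36, 0.05), 2))  # 1.8
```

The finer 0.04 log unit per-symbol step is what makes genuine letter-by-letter scoring meaningful on the Mars chart.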


Because CS testing involves detection rather than recognition, it is customary to allow patients to read ‘O’ for ‘C’, and vice versa ( ). The different nature of the task must be explained to the patient: near threshold they should be encouraged to blink and move their eyes, and guess the symbol if any shape is vaguely seen. This is very different to threshold for VA, when the black letter is still clearly detected, but its internal detail cannot be discriminated.


There is also a CS test available as part of the Thomson Test Chart 2000 Software (Thomson Software Solutions, Welham Green, UK), based on the PR, but only displaying one triplet of letters at a time. This may give different, and less repeatable results than either the PR or the Mars test: it is thought this may be due to the higher screen luminance, and the difficulty of displaying low contrasts accurately on a computer monitor which tends to artificially boost low-contrast features ( ).


Other contrast detection tests which seek to determine a single ‘peak sensitivity’ are the so-called ‘edge’ tests. The first of these was the Melbourne Edge Test ( ). The patient is presented with a circular target which is divided across the middle and has a different intensity on each side of the border: the orientation of the border must be detected and it may be oriented vertically or tilted slightly to right or left. Contrast is progressively reduced until the border can no longer reliably be distinguished. Here the stimuli are back-illuminated on a light box, and the results obtained do depend on the luminance of the test.


CS tests whose repeatability has been confirmed experimentally, can be used to monitor changes in performance with time, or interocular differences, which may not have been detected by high-contrast VA tests. The tests may also, for example, be used to confirm the patient’s subjective impression that their vision has deteriorated, despite the preservation of optotype acuity; or to decide which eye might perform most effectively when viewing through a magnifier when the monocular VAs are equal. A further use of CS tests is to demonstrate to relatives and carers the difficulties which the person with low vision experiences: for example, it can be very useful to show that they can only see high-contrast letters on a PR chart and then explain that faint objects will be invisible to them.


In summary, CS is a very useful descriptor of the visual status, and provides distinct information compared to that offered by other tests. Although it is a threshold test, it may help to quantify and explain the patient’s functional difficulties. A lowered CS is likely to impair the patient’s ability to see steps, curbs, and irregularities in the pavement, for example, because their recognition depends on the detection of relatively small contrast differences between features with large angular subtense ( Fig. 3.9 ).




Fig. 3.9


(A) The low-contrast posts at this pavement edge are difficult to detect, particularly when contrast sensitivity is impaired (as in (B)). The lamp-post is easier to detect because of its greater contrast with the background.


Seeing faces is a task of great practical significance to the patient: this applies both for recognising friends in the street, and for its role in face-to-face communication, where gestures, eye contact and mouth shapes corresponding to letters are all useful cues, especially to an older person whose hearing may also be impaired. One study found that for acuity less than 6/180, even the most robust facial cues (head nodding and shaking) could not be seen at a 1 m viewing distance; but with acuity better than 6/24, even subtle facial cues such as eye contact were seen by all subjects. However, faces are low-contrast targets, and speechreading ability has been found to be impaired by even subtle losses of CS (a simulated log CS of 2.0 relative to the normal level of 2.3). On a practical level, in the consulting room, auditory and tactile back-up to visual gestures must be used: where the patient’s attention would usually be engaged by eye contact, this might be backed up by a gentle touch on the hand; encouragement during history-taking might be given by both head nodding and verbal signals. Reduced CS also has a considerable effect on reading performance (see Reading Section), which may seem counter-intuitive as this task usually involves small detail at relatively high contrast. In fact there are very few everyday tasks whose performance is not impacted by reduced CS: from competitive rifle shooting ( ) to judging the location and distance of objects ( ), to take just two examples.


Predicting the outcome of cataract surgery


Just because the patient has one pathology which is treatable (cataract) does not mean that there is no other disease present. Either corneal or retinal comorbidity could mean that on removal of the cataract only limited improvement in vision will be achieved. In some cases, the surgery is required as part of treatment (e.g. in glaucoma management) or to allow monitoring of retinal disease, even if vision is not improved. However, in some cases, the patient will undergo the inconvenience and potential risks of the surgical procedure, with the possibility of a disappointing visual result. If resources are limited, it may be necessary to restrict treatment to those patients with a realistic prospect of visual improvement, or to determine which of their two affected eyes has the best visual prognosis. For both the surgeon and the patient, then, a realistic assessment of the ‘potential vision’ is required and a wide range of subjective and objective visual tests has been proposed ( ).


Some of these methods have been devised to try to ‘bypass’ the opacity. The Potential Acuity Meter (PAM) is a slit-lamp attachment in which an optical system is used to project the image of a Snellen chart directly onto the macula ( ). A lens forms an image of the illuminated letter chart only 0.15 mm in diameter in the plane of the patient’s pupil, and the examiner directs this through an observable ‘window’ in the opacity. The patient (wearing their refractive correction) sees the chart when they look down the beam and reports the lowest line which can be read. In laser interferometry, two coherent laser beams are directed through ‘windows’ in the opacity ( ). Where these coincide on the retina they interfere, coincident troughs in the waves forming dark areas and peaks creating light zones. The result is a grating of light and dark bars whose spatial frequency can be varied by altering the separation of the laser beams. Unfortunately, the technique does require windows in the opacity (and at different spacings depending on the spatial frequency to be created), and the image is bright and large, so it can often be detected even if a macular scotoma is obscuring or distorting part of it.


If two grating patterns of black-and-white stripes are overlapped, a regular pattern of dark stripes is seen running in a different direction to those in the original patterns. By varying the relative orientation of the two patterns, the spatial frequency of the resultant ‘Moiré fringes’ can be altered: a threshold can be determined if this is increased until the patient can no longer detect the fringes. This method was used in the ‘Visometer’ to measure acuity for a fringe pattern produced on the retina when white light was projected via two rotatable gratings through apertures in the opacity.
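The geometry behind varying the fringe frequency can be sketched numerically. The relation used below is the standard moiré result for two identical gratings crossed at a small angle; it is not taken from the source text, so treat it as an illustrative assumption:

```python
import math

def moire_frequency(grating_freq: float, angle_deg: float) -> float:
    """Spatial frequency of the moire fringes formed by two identical
    gratings of frequency `grating_freq` crossed at `angle_deg`.
    Standard moire geometry: f_moire = 2 * f * sin(theta / 2)."""
    theta = math.radians(angle_deg)
    return 2 * grating_freq * math.sin(theta / 2)

# Widening the crossing angle raises the fringe spatial frequency,
# which is how a threshold can be approached from coarse to fine.
for angle in (2, 5, 10, 20):
    print(angle, round(moire_frequency(100, angle), 1))
```

For small angles the fringe frequency grows nearly linearly with the crossing angle, which makes a smooth sweep towards threshold straightforward.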


The idea of allowing image-forming rays to pass through a window in the opacity has led to suggestions of using a form of pinhole acuity to assess potential vision. The standard pinhole test used routinely in subjective refraction is unlikely to help because the loss of light limits performance. However, a modified version has been developed which has been called the ‘potential acuity pinhole’, ‘super pinhole’ or ‘illuminated near chart’. In each case, the requirements are that the patient has a dilated pupil (and that in itself may be beneficial in revealing improvement), an occluder with multiple 1-mm pinhole apertures to place close to the eye, and a very brightly illuminated test chart (usually at near) ( ). The patient can move the pinholes to ‘search’ for the best acuity; the test is quick, inexpensive, and easy for the patient to carry out ( ); and optimum refractive correction is not required.


The clinical usefulness of any of these tests lies, of course, in how well they can predict the postoperative acuity, and none is particularly effective: it should be emphasized that the majority of research in this area has involved predicting acuity in patients with no additional eye disease ( ). One study found that for eyes with moderate cataract and comorbidity, only its super-illuminated pinhole test (a white-on-black distance letter chart) was a better predictor of the outcome than the clinical judgement of experienced cataract surgeons. With all techniques, the predictions become less accurate as the lens opacity becomes more dense, yet these are precisely the cases where the fundus cannot be visualised and the techniques would have greatest value. However, these techniques rarely overestimate the potential acuity in such cases, so a good preoperative result is unlikely to be an artefact and does suggest a successful outcome.


Visual field testing


Peripheral Visual Fields


At its most basic, visual field testing determines the total extent of the peripheral vision of the patient, and their ability to detect an object whose image is falling on peripheral retina. There are various ‘qualitative’ visual field measures which rely on practitioner judgement ( ). In the low-vision context, the most useful is one described as ‘kinetic boundary perimetry’, using a bead (5–10 mm in diameter, and red or white in colour [whichever gives the best contrast with the surrounding walls]) on top of a black rod about 30 cm in length. The examiner sits opposite the patient, and the patient views the examiner’s eye which is directly ahead: testing can be binocular, or monocular with the fellow eye well covered. The examiner brings the target around from behind the patient who reports when the bead target is first seen, as it moves from non-seeing to seeing, and then if it disappears as it is brought radially towards the direction of gaze. Bringing the target from above, below, temporal and nasal allows these extremes of the field to be determined. Because defects which respect the vertical and horizontal midlines are relatively likely (e.g. hemianopia, or quadrantanopia), it is better for the examiner to move the target slightly ‘off’ these meridians, rather than directly along them (e.g. to move the target from ‘11 o’clock’ and ‘1 o’clock’, rather than from ‘12 o’clock’). Although there are many automated visual field instruments (e.g. the Humphrey, Henson, Medmont, and Dicon ranges), these are not well suited to measuring functional visual fields. These instruments concentrate visual field testing within the central 30°, are designed to test monocularly, and, most importantly, are designed to pick up small changes in threshold sensitivity. 
If such a device is to be used in low-vision rehabilitation, the best option is to use a suprathreshold binocular test such as the binocular Esterman programme, which is usually included as it is used to assess compliance with the visual field requirements for driving in the UK. This test is considerably suprathreshold (targets are around 100× brighter than threshold), and 120° extent of the horizontal field is explored. Any automated programme is, however, best avoided for individuals with very severe reduction in visual field (e.g. tunnel vision): they can spend several minutes during the test when they are not responding, as targets are repeatedly presented to non-seeing areas of their visual field, which they can find very distressing. An accurate extent of the visual field for such patients is best obtained using the Amsler chart, which covers the central 20° of the visual field.


One would expect the visual field extent to be related to the ability of individuals to navigate their environment. One study used a real obstacle course to find out which areas of the visual field were necessary for safe mobility and orientation, and found that the central 37° radius, and the right, inferior and left portions of the annulus between 37° and 58°, were the most significant zones. The integrity of the extreme periphery of the field was not particularly significant. Other studies also found that visual field and (grating) CS were good predictors of orientation and mobility performance. Central and peripheral fields are both related to self-reported visual function, with the peripheral field (beyond 30°) being a slightly better predictor of mobility function ( ).


Central Visual Fields


Central field loss is very common in people with low vision, and affects multiple aspects of visual function. Although the automated visual field screeners described previously can be used to measure central visual fields, there are many limitations to using them to assess field loss (as opposed to subtle threshold variation). The most significant is that people with central field loss are likely to use a preferred retinal locus (PRL), so the test grid will not necessarily be centred on the fovea (see Chapter 13 ). They also typically have far poorer fixation stability, so the accuracy of each stimulus position will be low. This reduces the sensitivity, repeatability and accuracy of the visual field test ( ). Even with specific fixation targets (four pericentral spots, or a diamond) and careful instruction of the patient, there is no guarantee that fixation is stable and central ( ).


Traditional kinetic perimetry using a 2- or 3-mm white target on a Bjerrum (tangent) screen viewed from 1 m is considered to be accurate ( ), although extremely time-consuming, and demanding for the patient. Fixation again cannot be easily controlled, although an experienced clinician may be able to observe fixation changes, or notice if the plotted blind spot is not in its expected location.


The most accurate way to measure the central visual field is to use a microperimeter, although these instruments remain relatively uncommon in clinical practice. Microperimetry was originally a term used to describe perimetry using extremely small targets, which could identify the scotomas caused by the retinal blood vessels in normal eyes. Nowadays it is the term used for a technique which would be better described as ‘fundus perimetry’ or ‘fundus-related perimetry’ ( ). In this technique, the stimulus presented to the patient is simultaneously seen imaged on the fundus, and so its location can be clearly identified, and does not depend on the patient’s fixation. The first commercial instrument to be used in this way was the Rodenstock Scanning Laser Ophthalmoscope (SLO), although linking the successive target presentations/fundus images together had to be performed manually. Several microperimeters are now available which carry out this process automatically, each with advantages and disadvantages, including the MP-1, MP-3 and MAIA devices. If the instrument is to be used to investigate severe central field loss, it must have the ability to present very bright targets: if very clear fundus images are required this is likely to require a scanning laser rather than camera-based system ( Fig. 3.10 ). A microperimeter can also be used to identify the location and stability of the PRLs for an individual with a central scotoma, and this may be helpful in eccentric viewing (EV) training (see Chapter 13 ). Microperimetry is time-consuming, requires a skilled operator, and can only be performed monocularly, reducing its relevance to everyday life under binocular conditions. In contrast, nonautomated tests such as kinetic boundary perimetry can all be performed quickly, easily and binocularly.

