Binocular Vision





Introduction



“No question relating to vision has been so much debated as the cause of the single appearance of objects seen by both eyes” (Charles Wheatstone, 1838).


The question has been discussed at least since Aristotle (384–322 BC) and has been subject to empirical investigation since Ptolemy (ca. 100–170). For a scholarly discussion of the origins of many of the terms encountered in this chapter on binocular vision, see Wade (2021).


Humans and many other animals with frontally located eyes achieve binocular vision from the two retinal images through a series of sensory and motor processes that result in our rich percept of single objects and stereoscopic (three-dimensional [3D]) depth. Achieving “normal” single binocular vision and keen stereoscopic depth perception requires:



  • 1. Accurate central monocular fixation with each eye.

  • 2. Accurate simultaneous binocular fixation.

  • 3. Integrated neuromuscular activity of both the intra- and extraocular muscles.

  • 4. A sensory correspondence system organized about the two foveas.

  • 5. Similarity of the final ocular images from each eye (i.e., similar size, acuity, and contrast sensitivity).

  • 6. A neural mechanism to combine the images from the two eyes to compute binocular disparity and estimate stereoscopic depth intervals by scaling disparity with distance cues.



“Normal” binocular vision represents a highly coordinated organization of sensory and motor processes. Failure of any component may result in compromised binocular vision and stereopsis. Indeed, the evaluation of binocular vision is clinically very important because it tells us about the integrity of the underlying visual processes. Aside from refractive error, binocular vision anomalies are among the most common problems encountered in optometry and ophthalmology. About 7% of the population is stereoblind; as much as 30% of the population may have poor stereopsis; approximately 3% to 5% has strabismus; approximately 3% has amblyopia; and many more have high phorias, convergence insufficiency, etc. The topics that follow are intended to provide the foundation for the clinical science of binocular vision.


Why two eyes?


Perhaps most fundamentally, having two eyes confers the same evolutionary advantage as having two lungs or two ears: redundancy—it is good to have spare parts in case of loss. And having two eyes, like having two ears, provides for facial symmetry. A second functional advantage is that two eyes enable you to see more of the world. Each stationary eye alone can see about 150 degrees from side to side, but the binocular field of view is about 190 degrees, with a central 120 degrees of overlap of the monocular visual fields. Moreover, image distortions or scotomas in one eye, such as the blind spot, can be masked by a normal image in the other eye. Third, in normal vision, visual sensitivity for near-threshold stimuli is greater with two eyes than with one (by about a factor of 1.5), through the process of binocular summation. Indeed, it has been suggested that binocular summation may have provided the evolutionary pressure that first moved eyes toward the front of the faces of some birds and mammals. This advantage is reduced at suprathreshold levels by gain control mechanisms. Finally, two frontal eyes enable stereopsis, the vivid impression of three-dimensionality—of objects “popping out in depth”—that most humans get when viewing real-world objects with both eyes. This is based on binocular disparity—the differences between the two retinal images of the same world that are not available with purely monocular vision. At its most elementary level, acute stereopsis benefits both the predator, at the point of attack, and the prey, by unmasking the camouflage of predators.
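The roughly 1.5× binocular advantage mentioned above is often summarized by quadratic (energy) summation models, in which the binocular signal grows as the square root of the sum of the squared monocular signals. This is a common textbook account rather than a claim made in this chapter; a minimal sketch:

```python
import math

def binocular_sensitivity(s_left, s_right):
    """Quadratic (energy) summation of the two monocular sensitivities:
    a common first-order account of binocular summation at threshold."""
    return math.sqrt(s_left ** 2 + s_right ** 2)

# Equal monocular sensitivities give a binocular advantage of sqrt(2) ~ 1.41,
# close to the ~1.5 factor cited in the text; with one eye closed the
# advantage disappears.
print(binocular_sensitivity(1.0, 1.0) / 1.0)   # ~1.41
print(binocular_sensitivity(1.0, 0.0) / 1.0)   # 1.0
```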


Mapping the two eyes’ images into a single percept


Given that we see a single, unified world, intuitively it makes sense that information from the two eyes should be brought together at some point. However, until Hubel and Wiesel’s Nobel prize–winning discovery that this convergence is first evident in the primary visual cortex, where most neurons can be influenced by input from both the left and right eyes (that is, they are binocular), there were strong arguments about whether the information converged at all and, if so, whether it converged in a specialized “fusion center” in the brain—a notion that dates back to Descartes (1664).


A full description of the neural basis of binocular vision is beyond the scope of this chapter. In brief, the story begins with the semidecussation of the optic fibers at the optic chiasm. Axons from the retinal ganglion cells (RGCs) form the optic nerve of each eye. When they reach the chiasm, fibers from the nasal half of each retina cross to the opposite side of the brain; axons from the temporal half do not cross. Axons from the left half of each retina therefore end up in the left lateral geniculate nucleus (LGN), while axons from the right half end up in the right LGN; thus, the right LGN receives projections from the left visual field, and vice versa. Each LGN layer contains a highly ordered “map” of half of the visual field. This “topographic” mapping provides a neural basis for knowing where things are in space. Excitatory inputs from the two eyes do not converge onto a binocular map in the LGN (because each eye’s map is in a different layer), and there is limited functional binocular convergence of the eye-specific inputs in the retinogeniculate pathway, although information from the same part of the visual field is mapped to corresponding regions in adjacent layers. Binocular convergence first occurs in the primary visual (striate) cortex, where most neurons can be influenced by input from both the left and right eyes—that is, they are binocular. A binocular neuron has two receptive fields, one in each eye. In binocular primary visual cortex neurons, the receptive fields in the two eyes are generally quite similar, sharing nearly identical orientation and spatial-frequency tuning, as well as the same preferred speed and direction of motion. Thus, these cells are well suited to the task of matching the images in the two eyes. This convergence is a remarkable transformation, because up to that point the information from the two eyes is kept separate.


Many binocular neurons respond best when the retinal images fall on corresponding points in the two retinas, that is, when the two eyes’ images appear in identical visual directions (the same perceived spatial location). These neurons provide a neural basis for the horopter, which normally lies close to the plane of fixation and along which objects appear single. However, many other binocular neurons respond best when similar images occupy slightly different, or disparate, positions on the two retinas, or when the images fall on receptive fields with different spatial phases. In other words, these neurons are tuned to a particular binocular disparity. Binocular neurons in V1 may provide a neural substrate for detecting absolute disparity, i.e., the depth-distance from the horopter. Other cortical areas (e.g., V2, V4) may be the substrate for computing relative disparity, that is, depth discrimination (discussed later).
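The idea that V1 neurons become disparity tuned through position- or phase-shifted receptive fields is commonly formalized as the disparity-energy model (Ohzawa, DeAngelis, and Freeman). The sketch below is a one-dimensional toy version of that model, offered as an illustration rather than as the specific circuitry described here: a binocular "complex" unit sums the squared outputs of a quadrature pair of binocular simple cells, and it responds most strongly when the stimulus disparity matches the interocular phase shift of its receptive fields.

```python
import numpy as np

def gabor(x, sigma=0.5, freq=2.0, phase=0.0):
    """1D Gabor receptive-field profile (x in degrees of visual angle)."""
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * x + phase)

def disparity_energy(stim_left, stim_right, x, preferred_disparity, freq=2.0):
    """Binocular energy: squared responses of a quadrature pair of binocular
    simple cells whose right-eye receptive fields are phase-shifted by
    -2*pi*freq*preferred_disparity relative to the left-eye fields."""
    shift = -2 * np.pi * freq * preferred_disparity
    energy = 0.0
    for base in (0.0, np.pi / 2):                      # quadrature pair
        rf_left = gabor(x, freq=freq, phase=base)
        rf_right = gabor(x, freq=freq, phase=base + shift)
        simple = rf_left @ stim_left + rf_right @ stim_right
        energy += simple ** 2
    return energy

# A thin bright bar imaged with a 0.1-deg horizontal offset between the eyes.
x = np.linspace(-2, 2, 401)
bar = lambda center: np.exp(-(x - center) ** 2 / (2 * 0.05 ** 2))
left, right = bar(0.0), bar(0.1)

# The unit tuned to the stimulus disparity responds more strongly than the
# unit tuned to zero disparity.
print(disparity_energy(left, right, x, preferred_disparity=0.1))
print(disparity_energy(left, right, x, preferred_disparity=0.0))
```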


Visual direction


Locating objects in 3D space requires knowing the relationship between the locations of objects in physical space and their perceived (subjective) spatial locations, as illustrated in Fig. 36.1 . This simple figure illustrates several important concepts. The perceived visual directions of objects depend on the retinal locations of their images. Each retinal image location is associated with a specific visual direction, called its local sign or oculocentric direction. The primary visual direction is the oculocentric direction of an object that is fixated along the primary line of sight and is imaged on the center of the fovea (represented by the diamond in Fig. 36.1 ). Secondary lines of sight are the oculocentric directions of other retinal image locations relative to the primary visual direction. As illustrated in Fig. 36.1 , the perceived visual directions of nonfixated objects (represented by the circle and square ) are relative to the visual direction of the fixated object (represented by the diamond ). In the example, an image located to the left of the fovea is perceived to the right of the fixated object (e.g., the visual direction of the square). The normal ability to distinguish differences in oculocentric directions is extremely accurate, as demonstrated by Vernier alignment thresholds, which are on the order of 6 arc seconds for foveal viewing (see Chapter 33 ).




Fig. 36.1


Perceived visual direction as a function of retinal image location. ( A ) The relationships between objects in physical space and their retinal images. The diamond represents a fixated object (primary line of sight), and the nonfixated objects ( circle and square ) lie along secondary lines of sight. ( B ) The relationships between retinal image locations and their visual directions (oculocentric directions). The perceived visual directions of the nonfixated objects (secondary visual directions of the circle and square) are relative to the primary visual direction of the fixated diamond. fv, Fovea.


However, retinal image location alone is not sufficient to define the relationship between the physical and perceived locations of objects in space. The perceived direction of an object in space relative to the body, the egocentric direction, is derived from the combination of its oculocentric visual direction with information about version eye orientation in the head (i.e., direction of gaze) and head position relative to the trunk of the body. Egocentric localization is referenced to an egocenter, which is generally located at a point midway between the two eyes where a cyclopean eye represents binocular visual directions. The retina of the cyclopean eye is a metaphorical representation of a binocular map of oculocentric directions computed in the visual cortex. The gaze direction of the cyclopean eye equals the version gaze position of the two eyes. Projected visual directions from the cyclopean retina are egocentric (i.e., they are the combination of oculocentric direction and version eye position). The combined information of retinal position and versional eye position is critical to distinguishing between retinal image motion caused by eye movements and movement of the object.


Fig. 36.2 illustrates the concept of egocentric direction: The retinal oculocentric directions from the two retinal images of a single object at ( Fig. 36.2A ) or close to ( Fig. 36.2B , circle) the plane of fixation are combined to produce a single direction (haplopia) relative to the observer’s egocenter. Note that in this example, the binocular egocentric direction deviates slightly from either of the monocularly perceived directions by one-half of the angular disparity. If the retinal images have unequal contrast or luminance, the perceived direction will be biased or weighted toward the direction of the higher-contrast image.
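A minimal numerical illustration of this averaging, with made-up angles rather than values from the chapter: the fused (cyclopean) direction is the mean of the two eyes' oculocentric directions plus the conjugate (version) gaze angle, so it differs from each monocular direction by half the angular disparity.

```python
def egocentric_direction(dir_left_eye, dir_right_eye, version=0.0):
    """Binocular (cyclopean) direction of a fused target: the mean of the two
    oculocentric directions plus the version (conjugate gaze) angle.
    All angles in degrees; positive values are rightward."""
    return 0.5 * (dir_left_eye + dir_right_eye) + version

# A nonfixated target near the plane of fixation whose images fall 0.3 deg
# right of the left fovea and 0.1 deg right of the right fovea
# (0.2 deg of disparity), with the eyes looking straight ahead:
print(egocentric_direction(0.3, 0.1))   # 0.2 deg: midway between the
                                        # two monocular directions
```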




Fig. 36.2


Egocentric directions for haplopia and diplopia. ( A ) The oculocentric directions of a fixated object are combined to produce a single egocentric direction. ( B ) The retinal images of nonfixated objects that are near the plane of fixation also will be combined to produce a single egocentric direction. ( C ) When the disparity of retinal images is larger than the normal fusion range (Panum’s fusional area), the oculocentric directions are not combined and the object has two egocentric directions (physiologic diplopia). fv, Fovea; OD, right eye; OS, left eye; OU, both eyes.


In contrast, for objects that are outside the range of single vision, either nearer or farther than the plane of fixation, a single object has two separate egocentric directions that correspond to two separate positions (i.e., oculocentric directions) on the cyclopean retina. Different directions are stimulated when the two eyes’ retinal images are formed on noncorresponding retinal locations. When combined with version eye position information, they are perceived in separate egocentric directions (i.e., diplopia, Fig. 36.2C ). The egocentric directions of the diplopic images are perceived as though both monocular components had paired images on corresponding points in the contralateral eye ( Fig. 36.2C ). The relative egocentric direction of each of the diplopic images depends on whether the object is nearer or farther than the plane of fixation. For an object closer than the plane of fixation, the egocentric direction associated with the retinal image of the right eye is to the left of the egocentric direction produced by the image of the left eye. This form of diplopia, produced by near objects, is called “crossed” diplopia because of the right-left/left-right relationship between the eyes and the perceived locations of the object. Conversely, objects farther than the plane of fixation will be perceived with “uncrossed” diplopia, in which the right eye sees the object on the right side and the left eye sees it on the left side.


In normal binocular vision, diplopia of nonfixated objects lying nearer or farther than the plane of fixation, called physiologic diplopia, occurs as a natural consequence of the lateral separation of frontally located eyes and the topographic organization of binocularly corresponding visual directions across the retina. In contrast, patients with a late-onset strabismus may experience pathologic diplopia, wherein objects in the plane of fixation are perceived as doubled. In pathologic diplopia, the misalignment of the visual axis of the deviating eye causes a fixated target to be imaged on noncorresponding retinal locations and therefore to be seen in different egocentric directions by the two eyes during binocular viewing. In addition to diplopia, an object imaged on the fovea of the deviating eye may have the same egocentric direction as another object imaged on the fovea of the fixating eye, resulting in visual confusion. Because diplopia and visual confusion are perceptually intolerable, most strabismic patients adapt by suppressing input from the deviating eye and/or by developing anomalous retinal correspondence (discussed later in this chapter).


Binocular eye movements


Our eyes are in constant motion, making versions (conjugate changes in the visual axes of the two eyes in the same direction and magnitude) and vergences (rotations of the visual axes in opposite directions) as we look at and track objects in the world (see Chapter 9 ). However, in normal binocular vision, only versions influence the perception of egocentric direction, which is sensed as the average position of the two eyes. If vergence movements also influenced perception of egocentric directions, then vergence would produce disparities between visual directions of foveal images and would be in conflict with bifoveal retinal correspondence. Because vergence movements of the two eyes are in opposite directions, they do not change the computation of the average position of the two eyes. Thus, when the two eyes fixate objects to the left or right of the midline in asymmetric convergence, only the version, or conjugate, component of eye position contributes to perceived direction.


Normal retinal correspondence and the horopter


Fig. 36.2 illustrates the relationship between retinal image locations and perceived visual directions. The two foveal images of a fixated object ( Fig. 36.2A , diamond) share a common visual direction and are fused to produce the percept of a single haplopic object in subjective space ( Fig. 36.2B ). Points in the two retinae that, when stimulated, result in the perception of the same visual direction are defined as corresponding points. The foveas represent an important pair of corresponding retinal points, but there are many other pairs associated with secondary lines of sight and visual directions. For example, in Fig. 36.3 the square represents the location of an object in the right visual field that falls on paired retinal areas with a common visual direction for the two eyes, thus defining another set of corresponding retinal areas. Similarly, the circle represents the location of an object in the left visual field that is imaged on corresponding retinal points. The identical visual directions associated with corresponding points depend primarily on retinal image locations and their associated oculocentric directions. For a given fixation distance, the locations of objects in space whose images fall on corresponding points in the two eyes define a surface called the horopter. In effect, the horopter is a map of the locations in space where binocularly viewed objects appear single.




Fig. 36.3


Corresponding retinal points. Retinal areas in the two eyes with identical oculocentric directions ( B ) are corresponding retinal points. The locations of objects along the horizontal meridian that are imaged on corresponding points ( A ) define the longitudinal horopter. The physical location of an object that is on the horopter is quantified by the longitudinal visual angles (α1 and α2). fv, Fovea.


The longitudinal (horizontal) horopter is illustrated by the black curved line in Fig. 36.3A . The shape of the horopter depends on the viewing distance ( Fig. 36.4 ), as well as on physiologic and optical factors that affect the retinal images and their cortical representations. The abathic distance is the distance at which the empirical horopter is a straight line (matching the apparent frontoparallel plane, or AFPP). If the locations of corresponding points were determined simply by equal angular distances from the primary line of sight, then the horopter would be a circle passing through the fixation point and the entrance pupil of each eye (the geometric or theoretical horopter, or Vieth-Muller [V-M] circle; Fig. 36.4 , black circles ). The curvature of this circle changes with viewing distance because the radius of the V-M circle increases as viewing distance increases. Note, however, that the measured (empirical) longitudinal horopter rarely coincides with the V-M circle; the horopter equals the V-M circle only when corresponding points are equidistant from their respective foveas. The deviation of the empirical horopter from the geometric horopter (the Hering-Hillebrand deviation) has been taken as evidence of an asymmetry between the locations of corresponding points on the nasal and temporal retinas. Compression of corresponding points on the nasal retinae relative to the temporal retinae causes the horopter to be more curved than the V-M circle, and compression on the temporal retinae causes it to be less curved.
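The geometry of the theoretical horopter is easy to verify numerically. The sketch below assumes a typical 6.4 cm interpupillary distance and a simplified top-down geometry with the two entrance pupils on a straight line: the absolute disparity of a point is the difference between the binocular parallax (convergence) angle of the fixation point and that of the point in question, and points on the circle through the fixation point and the two entrance pupils subtend equal angles and therefore have zero disparity.

```python
import math

IPD = 6.4  # interpupillary distance in cm (an assumed, typical value)

def binocular_parallax(x, y, ipd=IPD):
    """Angle (deg) subtended at the point (x, y) by the two entrance pupils,
    placed at (-ipd/2, 0) and (+ipd/2, 0). y is distance straight ahead,
    x is lateral position, both in cm (top-down view)."""
    return math.degrees(math.atan2(x + ipd / 2, y) - math.atan2(x - ipd / 2, y))

def disparity_deg(point, fixation):
    """Absolute horizontal disparity (deg) of `point` while fixating `fixation`:
    the difference between the two binocular parallax (convergence) angles.
    Positive = nearer than the horopter (crossed) in this sign convention."""
    return binocular_parallax(*point) - binocular_parallax(*fixation)

fixation = (0.0, 40.0)                             # straight ahead at 40 cm
print(disparity_deg((0.0, 38.0), fixation))        # > 0: nearer, crossed
print(disparity_deg((0.0, 42.0), fixation))        # < 0: farther, uncrossed

# Any point on the circle through the fixation point and the two pupils
# subtends the same angle (inscribed-angle theorem), so its disparity is zero:
k = (fixation[1] ** 2 - (IPD / 2) ** 2) / (2 * fixation[1])   # circle center (0, k)
radius = fixation[1] - k
theta = math.radians(25)
on_circle = (radius * math.sin(theta), k + radius * math.cos(theta))
print(disparity_deg(on_circle, fixation))          # ~0: on the V-M circle
```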




Fig. 36.4


The empirical horopter ( thick solid line ) in relation to the Vieth-Muller circle (VMO; circles ), defined by the locations of objects with equal longitudinal visual angles (α1 and α2), and the objective frontoparallel plane (OFPP; dot-dot-dash line ), defined by a plane at the fixation distance that is parallel to a line passing through the entrance pupils of the two eyes.


Pairs of retinal points with identical visual directions in the two eyes also define a vertical horopter (i.e., the locations of object points in the midsagittal plane that are imaged on corresponding points along the vertical meridian; Fig. 36.5 ). The empirical vertical horopter is pitched top-back, with the degree of pitch increasing with viewing distance. Fig. 36.6 shows both the horizontal and vertical empirical horopters ( Fig. 36.6A ) measured at different viewing distances. When an observer fixates points at different distances along the ground plane, the declination (backward pitch) of the vertical horopter expands the range of single vision, keeping objects along the ground plane close to the horopter, and thus fused, at any single fixation distance.




Fig. 36.5


( A ) The geometric vertical horopter. The anatomical vertical meridians of the eyes are geometric corresponding points. When these points are projected into the world, they intersect at a vertical line in the head’s midsagittal plane–the geometric vertical horopter. ( B ) The empirical vertical horopter has crossed disparity below fixation and uncrossed disparity above fixation, causing a top-back pitch.

From Cooper EA, Burge J, Banks MS. The vertical horopter is not adaptable, but it may be adaptive. J Vis . 2011;11(3).



Fig. 36.6


Binocular horopter. ( A ) Binocular horopter along the vertical ( top ) and horizontal ( bottom ) meridians for different fixation distances (50, 100, and 200 cm). ( B ) Binocular horopter across the central visual field. Fixation distance was 100 cm. The abscissa and ordinate indicate azimuth and elevation, respectively. Median horizontal disparity for corresponding points is indicated by color: darker colors represent greater uncrossed disparity. White curve indicates where disparity changes sign from crossed to uncrossed. Red dots represent the tested field positions.

From Gibaldi A, Banks MS. Binocular eye movements are adapted to the natural environment. J Neurosci 2019;39(15):2877–2888.


Disparities in the natural environment


The heatmap ( Fig. 36.6B ) shows the distribution of binocular disparities across the central visual field for a fixed viewing distance (100 cm), with darker colors representing greater uncrossed disparity. It is interesting to note that the upper visual field tends to have uncrossed (far) disparities, whereas the lower field has more crossed (near) disparities, closely matching the distribution of retinal disparities encountered in the natural environment (Gibaldi & Banks, 2019); indeed, the binocular sensory and motor systems are nicely adapted to the statistics of the natural environment.


Panum’s fusional area


The retinal images of objects in the world need not fall on precisely corresponding points in the two retinae to be perceived as single. The horopter represents the specific locations of objects in the world that have zero retinal image disparity. For individuals with normal binocular vision there is a small range of distances, both behind and in front of the horopter, within which objects are seen as single. This range, illustrated by the shaded areas in Fig. 36.4 , is Panum’s fusional area. Objects within Panum’s area have some finite amount of binocular (retinal) disparity. The size and shape of Panum’s area depend both on the eccentricity of the objects (the larger the eccentricity, the larger Panum’s area) and on the spatial and temporal properties of the targets—smallest for small, rapidly varying (high spatial and temporal frequency) targets and larger for large, slowly varying (low spatial and temporal frequency) targets. Panum’s area tolerates the small vergence errors that occur during binocular fixation without the penalty of diplopia. Indeed, persons with normal binocular vision often manifest a fixation disparity, a small error in convergence under conditions of fusion—that is, while maintaining single binocular vision. Fixation disparity is generally not measured in the clinic; however, the amount of prism required to eliminate the fixation disparity (known as the associated phoria) is used to prescribe prism to relieve the asthenopia that can occur in patients with large heterophorias (see Box 36.1 ).



BOX 36.1

Asthenopia with uncorrected astigmatism and oculomotor imbalance





  • A 19-year-old college student



  • Visual complaints: Eye strain (burning and a “pulling” sensation) that she associates with reading and other near work. She is near-sighted, has worn glasses for 5 years, and her vision is OK with her spectacle correction.



  • Current spectacle correction and VAs




    • OD: −2.00 −0.50 × 075; 20/25



    • OS: −2.00 −0.25 × 090; 20/30




  • Refractive errors and VAs




    • OD: −2.25 −0.75 × 070; 20/15



    • OS: −2.25 −0.75 × 105; 20/15




  • Phorometric findings




    • Distance heterophoria: 4Δ E(P)



    • Near phoria: 6Δ E(P)




  • Ductions at near: base-in 7Δ/12Δ/4Δ; base-out 9Δ/14Δ/7Δ




    • Near associated phoria: 3Δ base out.




Comment: The patient’s asthenopia could be the result of the uncorrected astigmatism, of oculomotor imbalance (an eso fixation disparity caused by the esophoria), or both.



Binocular combination and binocular suppression in normal vision


The superiority of binocular over monocular viewing for detecting near-threshold isolated targets, about a factor of 1.5, has been documented and quantified in hundreds of studies. We still do not have a full understanding of how the inputs to the two eyes are combined; however, to account for the complexity of binocular interactions under a broad range of stimuli and tasks (including contrast detection, discrimination, and matching, as well as phase discrimination), almost all recent models of binocular combination incorporate dynamic gain control. Dynamic gain control in the visual system is somewhat akin to automatic gain control (AGC) of image contrast in cameras. Separate AGCs for each eye serve to equalize the perceived contrast of the two eyes’ images. Importantly, this class of models can predict the effects of having different stimulus contrast in the two eyes. When the stimuli in the two eyes differ substantially in contrast, the “winner takes all” (i.e., the eye with the higher contrast dominates). Indeed, a complex gain control model with flank-to-target and target-to-flank interactions ( Fig. 36.7 shows the structure of this model) also predicts the effects of nearby flanking contours on monocular and binocular contrast detection, discrimination, and matching, and the complete failure of binocular summation when the separation of the flanking contours is too small.
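The specific model in Fig. 36.7 has many components (monocular and interocular gain controls plus flank interactions), and its details are beyond this summary. The toy sketch below is only meant to convey the flavor of divisive interocular gain control; the exponent and constant are arbitrary placeholders, not fitted parameters from the published work.

```python
def binocular_response(c_left, c_right, m=2.0, s=0.05):
    """Toy two-eye contrast combination: each eye's signal is divisively
    normalized by the contrast energy in both eyes before summation
    (illustrative constants only)."""
    denom = s + c_left ** m + c_right ** m
    return (c_left ** m + c_right ** m) / denom

# Near threshold, the binocular response is roughly double the monocular
# response in this toy model: binocular summation.
print(binocular_response(0.02, 0.02), binocular_response(0.02, 0.0))
# At high, very unequal contrasts, the gain control lets the higher-contrast
# eye dominate ("winner takes all"): adding a weak image to the other eye
# barely changes the response.
print(binocular_response(0.8, 0.1), binocular_response(0.8, 0.0))
```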




Fig. 36.7


A binocular gain control model with target-flanker interactions.

From Lev M, Ding J, Polat U, Levi DM. Nearby contours abolish the binocular advantage. Sci Rep 2021;11(1):16920. https://doi.org/10.1038/s41598-021-96053-9 .


When the retinal images differ significantly in defocus or contrast, interocular blur suppression may occur. A wide range of conditions can produce unequal contrast of the two ocular images, including anisometropia, unequal amplitudes of accommodation, and asymmetric convergence on targets that are closer to one eye than to the other. Interocular blur suppression is requisite for adaptation to a monovision correction of presbyopia (in which one eye is corrected for distance viewing and the other for near). Monovision suppression allows clear, nonblurred binocular percepts and retention of stereopsis, albeit with the stereo-threshold elevated by approximately twofold.


However, when the stimuli to the two eyes differ substantially, a form of suppression known as binocular rivalry takes place. The classic example of binocular rivalry occurs when nonfusible images such as orthogonally oriented gratings, for example horizontal in one eye and vertical in the other, are presented to corresponding retinal areas in the two eyes. Under these conditions, only one set of gratings (say horizontal) will be seen and, after several seconds, the image of the other set of gratings (say vertical) will appear to wash over the first (wholesale suppression). At other times, the two monocular images become fragmented, and small interwoven retinal patches from each eye alternate independently of one another. In this case, piecemeal suppression is regional, localized to the vicinity of the contour intersections.


The rivalrous patches appear to alternate between the two images approximately once every 4 seconds, but the rate of oscillation and its duty cycle vary with the degree of difference between the two ocular images and with the stimulus strength. The rate of rivalry increases as the orientation difference between the targets increases beyond 22 degrees, indicating that rivalry is not likely to occur within the tuned range for cortical orientation columns. Levelt formulated a series of rules describing how the dominance and suppression phases of binocular rivalry vary with the strength of stimulus variables such as brightness, contrast, and motion. He concluded that the duration of the suppression phase of one of the retinal images decreases when its stimulus visibility or strength is increased relative to the retinal image of the fellow eye. If the strengths of the stimuli for both eyes are increased, then the suppression phase for each eye decreases, and the oscillation rate of rivalry increases. Reducing the stimulus contrast of both eyes also reduces the oscillation rate until, at a very low level of contrast, rivalry is replaced with a binocular summation of nonfusible targets.


The perception of rivalry has a latency of approximately 200 ms so that briefly presented nonfusible patterns appear superimposed. However, rivalry occurs between dichoptic patterns that are alternated rapidly at 7 Hz or faster, indicating an integration time of at least 150 ms. When rivalrous and fusible stimuli are presented simultaneously, fusion takes precedence over rivalry, and the onset of a fusible target can terminate suppression, although the fusion mechanism takes time (150–200 ms) to become fully operational. Suppression and stereo-fusion appear to be mutually exclusive outcomes of binocular rivalry stimuli presented at a given retinal location, but stereoscopic depth and rivalry can be observed simultaneously when fusible contours are superimposed on rivalrous backgrounds.


The classic demonstrations of rivalry pit a stimulus in one eye against a stimulus in the other eye. However, rivalry can also occur between fragments of images in the same eye, and it is now clear that rivalry is part of a larger effort by the visual system to deal with ambiguity and come up with the most likely version of the world, given the current retinal images.


Binocular rivalry is not the only form of suppression that takes place in normal binocular vision. Indeed, we rarely experience double vision in our everyday lives; however, as we view the world, much of the information lands on noncorresponding areas in the two eyes. Objects that are at some distance in front of or behind the plane of fixation (with binocular disparities that are well outside the limits of Panum’s fusional areas) are rarely diplopic under normal casual viewing conditions because of the suppression of one image. The suppression of physiologic diplopia is called suspension because, unlike the binocular rivalry discussed previously, this form of suppression does not alternate between the two images. Instead, only one image is continually suppressed, favoring visibility of the target imaged on the nasal hemiretina. However, calling attention to the disparate target can evoke physiologic diplopia.


More profound forms of suppression known as permanent suppression and continuous flash suppression (CFS) can also occur in normal vision. Permanent suppression occurs when one eye views a contoured stimulus while the other eye views a spatially homogeneous field. Under these conditions, the image of the eye viewing the contoured field dominates, while the image of the eye viewing the homogeneous field is almost continually suppressed. Permanent suppression occurs for the normal blind spot, which appears filled in, even under monocular viewing conditions. Because of the stability of the dominance/suppression percept under these viewing conditions, the term permanent suppression is used to distinguish this type of suppression from binocular rivalry suppression. You can experience a powerful demonstration of permanent suppression by holding a cylindrical tube in front of your right eye and placing the palm of your left hand in front of the left eye near the far end of the tube. The combined stable percept is that of a hole in the left hand. The hand is seen as the region surrounding the aperture through which the background is viewed. This ecologically valid example of occlusion gives priority to the background seen through the aperture.


A similar profound suppression of one eye takes place when a rapidly changing sequence of high-contrast, contour-rich patterns is presented to one eye and a static stimulus is presented to the other eye. Under these conditions, the static stimulus is suppressed from awareness. CFS appears to selectively suppress low spatial frequency and cardinally oriented features in the image.


Binocular (retinal) disparity and depth perception


Objects that are not on the horopter will have some amount of binocular retinal disparity. Objects with horizontal retinal disparity are imaged on laterally separated, noncorresponding retinal areas and, as in the case of the empirical horopter, the longitudinal visual angles (α1 and α2, the angles subtended by the primary and secondary lines of sight; Fig. 36.3 ) are unequal. Horizontal binocular disparity is a unique binocular stimulus for stereoscopic depth perception, horizontal disparity vergence eye movements, and horizontal diplopia, either physiologic or pathologic. In each case, the perceptual or motor response is a consequence of the relationship between the object’s disparate images and the horopter. Crossed disparities give rise to a perception of “near” stereoscopic depth or crossed diplopia and elicit convergence eye movements, while uncrossed disparities give rise to a sense of relative “far” stereoscopic depth or uncrossed diplopia and elicit divergence eye movements.


Absolute and relative disparity


The absolute disparity of an object is the difference between the angle subtended by the object at the two entrance pupils of the eyes and the angle of convergence. Absolute disparity is the optimal cue, or stimulus, for vergence eye movements. The square in Fig. 36.8 is farther away than the diamond (where the observer is fixating), so the secondary lines of sight for the square intersect behind the horopter, which passes through the point of convergence. The absolute disparity produced by the square is uncrossed (it can be quantified by the difference between the longitudinal visual angles α1 and α2), and uncrossed disparities outside of Panum’s fusional area evoke divergence eye movements. Uncrossed disparities may also give rise to the percept of relative “far” stereoscopic depth or, if the disparity is large, to uncrossed diplopia (i.e., double vision with the right eye’s image seen to the right of the left eye’s image). Crossed disparities give rise to the percept of relative “near” stereoscopic depth or, if the disparity is large, to crossed diplopia (i.e., double vision with the right eye’s image seen to the left of the left eye’s image), and crossed disparities outside of Panum’s fusional area evoke convergence eye movements.




Fig. 36.8


Binocular disparity and the perception of stereoscopic depth. Objects that are not located on the longitudinal horopter have binocular disparity, which may be crossed disparity ( circle in panel A ), perceived as nearer than the fixated object (the relative depth of the circle with respect to the diamond in panel B ), or uncrossed disparity ( square in A ), perceived as farther than the fixated reference object ( B ). fv, Fovea; VMO, Vieth-Muller circle.


The judgment of relative depth (for instance, the distance of the square relative to the diamond in Fig. 36.8 ) is based on relative disparity (i.e., the difference between the absolute disparities of two objects). Horizontal relative disparity provides the cue for high-resolution stereopsis, or stereo-depth discrimination. Humans are an order of magnitude more sensitive to relative than to absolute disparity.
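To make the distinction concrete, here is a small sketch (assuming on-axis targets and a typical 6.4 cm interpupillary distance): the absolute disparity of a target is the angle it subtends at the two eyes minus the convergence angle, while the relative disparity of two targets is the difference of their absolute disparities, from which the convergence term cancels. That cancellation is often cited as one reason relative-disparity judgments can be so precise: they are unaffected by small vergence errors.

```python
import math

IPD_CM = 6.4  # assumed typical interpupillary distance

def convergence_angle(distance_cm, ipd_cm=IPD_CM):
    """Angle (deg) subtended by an on-axis point at the two entrance pupils."""
    return math.degrees(2 * math.atan(ipd_cm / (2 * distance_cm)))

def absolute_disparity(target_cm, convergence_cm):
    """Angle subtended by the target minus the current convergence angle (deg).
    Negative = uncrossed (target beyond the convergence distance)."""
    return convergence_angle(target_cm) - convergence_angle(convergence_cm)

def relative_disparity(target1_cm, target2_cm, convergence_cm):
    """Difference between two targets' absolute disparities. The convergence
    term cancels, so the value does not depend on where the eyes converge."""
    return (absolute_disparity(target1_cm, convergence_cm)
            - absolute_disparity(target2_cm, convergence_cm))

# Fixating (converging) at 40 cm, with a second target at 45 cm:
print(absolute_disparity(45, 40))       # uncrossed (negative)
print(relative_disparity(45, 40, 40))   # same value ...
print(relative_disparity(45, 40, 39))   # ... even with a vergence error
```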


Stereoacuity


In normal foveal vision under optimal conditions, stereoacuity (the smallest detectable depth difference, that is, the smallest detectable difference between the longitudinal visual angles α1 and α2 in Fig. 36.8 ) is a “hyperacuity” (Westheimer), with thresholds as low as 3 arc seconds! These hyperacuity thresholds correspond to retinal distances smaller than the width of a single photoreceptor. Of course, normal stereoacuity depends on a number of stimulus conditions, including contrast, spatial frequency, and viewing duration, as well as the degree of similarity between the stimuli to the two eyes. For example, optical defocus of the retinal images due to uncorrected refractive error degrades stereoacuity in proportion to the magnitude of defocus. The effect of blurring the retinal images is greater for stereoacuity than for other resolution tasks, such as Vernier acuity, visual acuity, and instantaneous displacement thresholds.


Surprisingly, unilateral optical defocus produces a greater reduction of stereoacuity than does symmetric bilateral defocus, consistent with other paradoxical effects on stereoacuity caused by interocular differences in stimulus parameters. For example, starting from equal defocus in the two eyes, improving the focus of one eye alone will, paradoxically, degrade stereopsis more than leaving both eyes equally defocused (analogous to the stereo-contrast paradox). With unilateral defocus, stereoacuity decreases as the amount of defocus increases until stereopsis is suspended by interocular differences greater than about two diopters. The filtering of high spatial frequencies that occurs in defocused images cannot fully explain the resulting reduction in stereoacuity, and foveal suppression may also play a role.


Because of the effect of optical defocus on stereoacuity, a clinician’s primary goal is usually to provide an accurate refractive correction for each eye to optimize binocular vision and stereoacuity. However, for one group of patients—presbyopes—the interocular refractive corrections are often purposely unbalanced to correct one eye for distance vision and the other eye for near vision. This scheme of providing a distance correction for one eye and a near correction for the other, whether with single-vision contact lenses, refractive surgery, or intraocular lens implants, is called monovision. Monovision appears to be well tolerated by many (but by no means all) patients with normal binocular vision, and patients seem to experience less disorientation during perceptual adaptation when the dominant eye is corrected for distance vision. Following a period of adaptation, most patients do not experience symptoms of asthenopia, nor are they aware of blurred vision at far or near viewing distances. Although their binocular field of view and motor fusion ranges are generally normal, many monovision patients will experience diplopia at night or in dim illumination (when their pupils are dilated), or when they view bright, high-contrast targets. Some visual functions do not adapt to unilateral optical defocus, resulting in a loss of binocular summation for high spatial frequencies, reduced binocular visual acuity, and degraded stereoacuity with foveal suppression.


However, a recently discovered unwanted effect of monovision correction is the misperception of the distance of moving objects, which may pose a public safety hazard. Burge and colleagues discovered that having one retinal image blurred and the other clear can introduce a motion illusion because the two images are processed at different speeds. The blurred image is actually processed faster than the clear image, because blur removes the high spatial frequency information, which is processed more slowly than the remaining low spatial frequency information. The difference in processing speeds results in a dramatic illusion in the perceived depth of moving targets, analogous to the well-known Pulfrich effect (described next), but in reverse. Indeed, they calculate that a monovision correction that induces a +1.5 D difference between the two retinal images will result in overestimation of the distance of a target moving at 15 miles per hour by more than 9 feet (2.8 m; illustrated in Fig. 36.9 ).




Fig. 36.9


The effect of a monovision lens on perceived distance. The interocular blur differences caused by a monovision correction can cause dramatic misperceptions of motion.

From Burge J, Rodriguez-Lopez V, & Dorronsoro C. Monovision and the misperception of motion. Curr Biol 2019;29(15):2586–2592, e2584. https://doi.org/10.1016/j.cub.2019.06.070 .


It has been known for about a century that interocular differences in retinal illuminance or contrast result in a misperception of the depth of moving objects, known as the Pulfrich effect. Reduced illuminance or contrast in one eye slows the processing of motion in that eye, causing a delay relative to the other eye and thereby introducing a neural disparity. Depth distortion with the Pulfrich effect increases with target velocity.
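The geometry common to the classic and reverse Pulfrich effects can be sketched in a few lines: an interocular processing delay converts the angular velocity of a moving target into an effective neural disparity (delay times angular speed), and the disparity relation given at the end of this chapter (η ≈ a·Δb/d²) converts that disparity into a predicted depth error (Δb ≈ δ·d²/a). The numbers below are illustrative assumptions (a few milliseconds of delay, a 6.4 cm interpupillary distance), not values taken from the studies cited.

```python
def pulfrich_depth_error(speed_m_s, distance_m, delay_s, ipd_m=0.064):
    """Approximate depth misperception (m) of a target moving laterally at
    speed_m_s, viewed at distance_m, when one eye's signal is delayed by
    delay_s. Small-angle geometry: disparity = angular speed * delay, and
    depth error = disparity * distance**2 / interpupillary distance."""
    angular_speed = speed_m_s / distance_m      # rad/s (small angles)
    neural_disparity = angular_speed * delay_s  # rad
    return neural_disparity * distance_m ** 2 / ipd_m

# Illustrative only: a target moving at 15 mph (~6.7 m/s), viewed at 10 m,
# with a hypothetical 4 ms interocular delay, is mislocalized in depth by
# roughly 4 m under these assumed values.
print(pulfrich_depth_error(6.7, 10.0, 0.004))
```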


Stereoacuity also provides a very important clinical assay of the integrity of binocular vision. Normal stereoscopic vision is experience dependent and requires normal binocular visual experience over a period of early childhood (see Chapter 38 ). Moreover, stereopsis may also be impaired in older patients by disease (e.g., age-related macular degeneration), so clinical testing of stereopsis is important for detecting binocular abnormalities in both children and adults, and most clinical tests are designed as screening tools for distinguishing between normal and abnormal binocular vision. There is a wide range of clinical stereoacuity tests; however, clinicians should be aware that many of these contain either monocular cues or nonstereoscopic binocular cues that can enable patients to “pass” the test despite being stereoblind or stereo-deficient (see Box 36.2 ).



BOX 36.2

Reduced stereoacuity associated with primary microstrabismus





  • College student, diagnosed at age 22 years



  • No surgery or orthoptic treatment




    • Cover test: Constant 6Δ esotropia (ET) OS at distance and near




  • Refractive errors and visual acuities (VAs):




    • OD: Plano 20/16



    • OS: +0.25 –0.50 × 180; 20/25




  • Lateral heterophoria at distance with Maddox Rod—normal retinal correspondence.



  • Bagolini striated lens test at near: lines intersect at the fixation point—harmonious anomalous retinal correspondence.



  • No stereopsis with the Randot circles test.



Comment: The patient has primary microstrabismus with central harmonious anomalous retinal correspondence and reduced stereoacuity, but with normal peripheral retinal correspondence.



Fig. 36.10 shows examples of two classes of stereograms used for testing stereopsis. The top stereogram is an example of a contour stereogram that tests “local” stereopsis: it consists of high-contrast test figures having unambiguous features, and relative disparities that can be detected by all patients with normal binocular vision. However, these stereograms contain monocular (i.e., local) cues that are discernable in targets with large disparities. When properly fused, the contours in this stereogram provide clear stereoscopic depth perception from binocular disparity, but the relative differences in the positions of the stimuli may be apparent without fusion or stereopsis.




Fig. 36.10


Examples of stereograms with contour-defined objects (local stereopsis) or disparity-defined objects (global stereopsis or cyclopean perception). The disparity patterns in the contour and random-dot stereograms are identical in size and disparity magnitude. Most individuals with normal binocular vision can learn to free-fuse the stereograms by either overconvergence or underconvergence and observe the stereoscopic depth in each example. However, it should be noted that the two viewing strategies produce opposite directions of relative depth.


The lower panels in Fig. 36.10 show an example of a random-dot stereogram (RDS) that tests “global” stereopsis. Although the locations, directions, and disparity magnitudes of the stimuli in the RDS and contour stereograms are identical, the pattern is more difficult to see in the RDS. Because the form information in an RDS is not available to either eye alone (it is global: fusion and stereopsis are necessary to see the form), this form of stereopsis has also been called “cyclopean.” Correct identification of a disparity-defined form embedded in an RDS is definitive evidence that the patient has stereopsis: the global image is defined by its binocular disparity, and cyclopean stereoscopic depth reveals it. However, global stereopsis tests generally consist of small, dense, low-contrast elements, which make them especially challenging for patients with binocular anomalies (e.g., amblyopia), even when those patients actually have the neural mechanisms necessary for global stereopsis.
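The logic of a random-dot stereogram is easy to see in code: the two half-images are identical random dots except that a patch is displaced horizontally in one eye's image, so the form is carried entirely by disparity and is invisible to either eye alone. The sketch below is a generic construction, not the recipe for the specific published test figures.

```python
import numpy as np

def make_rds(size=200, square=80, shift=4, seed=0):
    """Build left/right half-images of a random-dot stereogram in which a
    central square patch is displaced `shift` pixels in the right eye's image."""
    rng = np.random.default_rng(seed)
    left = rng.integers(0, 2, (size, size))
    right = left.copy()
    r0 = (size - square) // 2
    patch = left[r0:r0 + square, r0:r0 + square]
    # Paste the patch into the right image shifted to the left by `shift` px.
    right[r0:r0 + square, r0 - shift:r0 - shift + square] = patch
    # Fill the strip uncovered by the shift with fresh random dots so that
    # neither half-image contains a monocularly visible outline.
    right[r0:r0 + square, r0 + square - shift:r0 + square] = rng.integers(
        0, 2, (square, shift))
    return left, right

left_img, right_img = make_rds()
# Each half-image on its own is featureless noise; when the pair is fused,
# the displaced patch is seen as a square standing out in depth.
```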


Stereopsis and the “matching problem”


Each half of the RDS illustrated in Fig. 36.10 ( bottom ) contains 1000 by 1000 dots! This leads to the matching or correspondence problem in binocular vision (i.e., the problem of figuring out which bit of the image in the left eye should be matched with which bit in the right eye). The problem is particularly vexing in images like RDSs: matching the thousands of left-eye dots to the thousands of right-eye dots would require a lot of work for any computational system. However, the visual system adopts a number of strategies to simplify the problem. These include blurring, which reduces the high spatial frequency information, and adopting certain heuristics or constraints. The uniqueness constraint asserts that a feature in the world is represented exactly once in each retinal image (i.e., each monocular image feature should be paired with exactly one feature in the other monocular image). The continuity or smoothness constraint holds that, except at the edges of objects, neighboring points in the world lie at similar distances from the viewer; accordingly, disparity should change smoothly at most places in the image. These (and possibly additional) constraints make the matching problem much more tractable by reducing the number of possible solutions. However, recent work suggests that identifying correct matches may not be the optimal strategy. Rather, this work suggests that the brain uses “what not detectors” that sense dissimilar features in the two eyes and suppress unlikely interpretations of the scene, thus facilitating stereopsis by providing evidence against interpretations that are incompatible with the true structure of the scene. Note that this scheme implies that a large (perhaps infinite) number of candidate matches are considered and rejected.
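To make the correspondence problem and the constraints concrete, here is a deliberately naive one-dimensional matcher: for each position along an image row it searches a range of horizontal shifts for the best-correlated patch in the other eye's row, keeps a single best match (a crude stand-in for the uniqueness constraint), and penalizes disparities that differ from the neighboring estimate (a crude stand-in for the continuity constraint). It is a toy illustration, not a model of how cortex solves the problem.

```python
import numpy as np

def match_row(left_row, right_row, patch=7, max_disp=8, smooth=0.1):
    """Estimate a disparity at each position along one row by maximizing
    mean-subtracted patch correlation minus a smoothness penalty."""
    half = patch // 2
    disparities = []
    prev_d = 0
    for x in range(half + max_disp, len(left_row) - half - max_disp):
        ref = left_row[x - half:x + half + 1]
        best_d, best_score = 0, -np.inf
        for d in range(-max_disp, max_disp + 1):       # candidate matches
            cand = right_row[x + d - half:x + d + half + 1]
            score = float(np.dot(ref - ref.mean(), cand - cand.mean()))
            score -= smooth * abs(d - prev_d)          # continuity constraint
            if score > best_score:                     # keep one match: uniqueness
                best_d, best_score = d, score
        disparities.append(best_d)
        prev_d = best_d
    return disparities

# Applied to one row of the random-dot stereogram sketched earlier, the
# recovered disparities are ~0 in the surround and ~ -shift inside the
# displaced square:
# match_row(left_img[100].astype(float), right_img[100].astype(float))
```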


Retinal disparity, perceptual constancy, and perceived depth


The retinal disparity associated with the physical depth of an object can be determined by the relationship:


η = (a × Δb / d²) × c
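In the usual reading of this relationship (an assumption here, since the chapter's definitions of the symbols are not included in this excerpt), η is the retinal disparity, a is the interpupillary distance, Δb is the depth interval between the object and the fixation point, d is the viewing distance, and c is a constant that converts radians to the desired angular units (206,265 for arc seconds). Rearranged for the smallest detectable depth interval, and assuming a typical a = 6.4 cm, d = 40 cm, and a 3 arc second stereo threshold:

Δb = η × d² / (a × c) = 3 × (40 cm)² / (6.4 cm × 206,265) ≈ 3.6 × 10⁻³ cm ≈ 36 µm

That is, under these assumptions an observer fixating at 40 cm can detect a depth difference of only a few hundredths of a millimeter, which is why stereoacuity is classed as a hyperacuity.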
