The Effects of Visual Deprivation After Infancy


Over the last three decades it has been well documented that visual deprivation, especially when it occurs early in life, results in fundamental alterations of neural function across more than 25% of cortex—changes that range from metabolism to behavior and collectively represent one of the most dramatic examples of plasticity in the human brain. As schematized in Fig. 41.1, blindness has a massive impact across multiple neuroanatomical levels, from molecules to function.

Fig. 41.1

Early blindness changes brain organization across multiple scales. These alterations include neurotransmitter regulation, local synaptic connectivity, neural network organization, white matter pathways, cortical specification, and behavioral abilities, as discussed in the main text.

“Blindness” is an extremely broad term, with a variety of definitions that vary across common usage and the legal, medical, rehabilitation, and scientific literatures ( Box 41.1 ). In this review we primarily focus on early congenital or late severe (light perception [LP] or worse) blindness because these forms of blindness are the most heavily studied. Scientists studying the effects of early blindness have generally focused on relatively homogenous groups of individuals with LP or no LP (NLP) vision occurring either at or within a year of birth. Scientists studying late blindness have, similarly, often focused on groups of individuals with a normal visual history until adulthood, followed by a decline to LP or NLP vision. However, these groups represent a relatively small proportion of blind individuals. Most blind individuals have significant residual vision, and many have a complex medical history that involves worsening vision over many years ( Fig. 41.2 ).

BOX 41.1


  • Legally blind (US definition): Vision worse than 20/200, or a field of view smaller than 20 degrees in the better eye.

  • Finger counting: An individual can tell how many fingers the ophthalmologist is holding up.

  • Hand motion: An individual can tell that the ophthalmologist is waving a hand in front of their eyes.

  • Light perception (LP): An individual can tell if the lights in a room are on or off (roughly similar to a normally sighted individual’s vision with their eyes closed).

  • No light perception (NLP): An individual cannot tell whether the lights in a room are on or off.
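The US legal definition above combines two criteria with a logical OR. A minimal sketch of that rule, with parameter names (`snellen_denominator`, `field_of_view_deg`) chosen purely for illustration:

```python
def is_legally_blind_us(snellen_denominator, field_of_view_deg):
    """US legal blindness per Box 41.1: acuity worse than 20/200 in the
    better eye, OR a field of view narrower than 20 degrees.
    Parameter names are illustrative, not from any clinical standard."""
    return snellen_denominator > 200 or field_of_view_deg < 20

# 20/400 acuity with a full field qualifies; so does 20/20 with a 10-degree field;
# 20/200 exactly, with a full field, does not ("worse than 20/200").
```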

Fig. 41.2

Blindness is a broad term, characterizing a wide range of visual impairments. ( A ) Original picture, representing a 20-degree field of view. ( B ) The upper limit of the legal definition of blindness: 20/200 Snellen acuity, simulated by removing frequencies above 3 cycles/degree (cpd). ( C ) The upper limit of finger-counting vision: 20/1000 Snellen acuity, simulated by removing frequencies above 0.6 cpd. ( D ) The upper limit of hand motion vision: 20/2500 Snellen acuity, simulated by removing frequencies above 0.24 cpd. ( E ) The upper limit of light perception vision: 20/6060 Snellen acuity, simulated by removing frequencies above 0.1 cpd.

The conversion from “finger counting” and “hand motion” to Snellen acuity is based on Schulze-Bonsel K, Feltgen N, Burau H, Hansen L, Bach M. Visual acuities “hand motion” and “counting fingers” can be quantified with the Freiburg Visual Acuity Test. Invest Ophthalmol Vis Sci. 2006;47(3):1236–1240. Photo from Sarah (August 7, 2021), York, UK.
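The simulations described in the Fig. 41.2 caption can be reproduced with a simple frequency-domain sketch. The code below converts a Snellen denominator to an approximate low-pass cutoff (assuming, as a common rule of thumb, that 20/20 vision resolves roughly 30 cycles/degree) and then discards all spatial frequencies above that cutoff with an ideal low-pass filter. The `pix_per_deg` display parameter and the choice of an ideal (rather than smooth) filter are assumptions; the chapter's figure may have been generated differently.

```python
import numpy as np

def snellen_to_cutoff_cpd(denominator, normal_cpd=30.0):
    """Approximate cutoff (cycles/degree) for 20/denominator acuity,
    assuming 20/20 vision resolves ~30 cpd (a rule-of-thumb assumption)."""
    return normal_cpd * 20.0 / denominator

def simulate_acuity(img, snellen_denominator, pix_per_deg):
    """Low-pass filter a 2-D grayscale image, removing spatial frequencies
    above the cutoff implied by the given Snellen acuity."""
    cutoff = snellen_to_cutoff_cpd(snellen_denominator)
    # fftfreq with sample spacing in degrees gives frequencies in cycles/degree
    fy = np.fft.fftfreq(img.shape[0], d=1.0 / pix_per_deg)
    fx = np.fft.fftfreq(img.shape[1], d=1.0 / pix_per_deg)
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    spectrum = np.fft.fft2(img)
    spectrum[radius > cutoff] = 0.0  # ideal low-pass filter
    return np.real(np.fft.ifft2(spectrum))

# The cutoffs quoted in the caption:
for d in (200, 1000, 2500, 6060):
    print(f"20/{d}: ~{snellen_to_cutoff_cpd(d):.2f} cpd")
```

Running the loop recovers the caption's values (3, 0.6, 0.24, and ~0.1 cpd), which is just the 30-cpd rule scaled by the acuity fraction.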

Neuroanatomical development

Fig. 41.3 shows a schematic of the major milestones for white matter, neuronal, and biochemical development within visual pathways before and after eye opening.

Fig. 41.3

The time course of milestones of developmental plasticity, measured across a variety of species. We normalized postconception (PC) and postnatal (PN) days for each species to a timeline based on human development using the Workman translating time model. Prenatal events are reported as days PC; PN dates are shown as months PN. E/I , excitatory/inhibitory; LGN , lateral geniculate nucleus.

Reproduced with permission from the Annual Review of Vision Science, Volume 4, © 2018 by Annual Reviews; and from Park WJ, Fine I. New insights into cortical development and plasticity: from molecules to behavior. Curr Opin Physiol. 2020;16:50–60.

White matter pathways

In the normally developing brain, the early wave of migrating neurons that provides a first rough blueprint of cerebral organization is primarily governed by intrinsic signaling, controlled in part by graded expression of transcription factors. These pathways begin to form well before the onset of visual experience, starting around postconception (PC) day 80. By PC day 200 the major visual white matter tracts linking occipital cortex to other regions of the brain, and the connectivity patterns underlying the retinotopic topography of early visual areas (V1-V2-V3), have begun to develop, well before the axons from the optic radiations innervate cortex.

However, although major white matter pathways are established relatively early in development, these prenatal tracts are initially unmyelinated, with the elaboration of the myelin sheath around neuronal axons only beginning at birth. Myelination of posterior tracts is rapid in the first year or two of life, and continues throughout childhood.

Both human and animal models (for a review of the early literature, see ref ) find that early blindness leads to atrophy of the pathways from the retina to early visual cortex. Several animal models have shown atrophy of the connections between the eye and V1 after early-onset blindness. Similarly, a variety of studies in humans have consistently found decreased white matter volume, decreased axial diffusivity, and increased radial diffusivity in the optic nerve and radiations within both early-blind and anophthalmic individuals.

In contrast to these subcortical pathways, the effects of blindness on corticocortical pathways are relatively subtle. Consistent with the evidence that the macrostructure of these pathways develops before the onset of retinal input, neither the strength nor the macroscale topographic organization of callosal connections is dramatically affected by early blindness, and retinotopic organization seems to persist in the absence of visual experience. The main changes in corticocortical white matter that have been observed as a result of early blindness using diffusion-weighted imaging suggest attenuation of occipital to temporal connections, and enhanced connectivity between occipital and frontal cortex (for review, see ref ).

Neurochemistry and microstructure

Extensive literature on animal models suggests that the development of local excitatory and inhibitory networks is strongly mediated by visual experience. Visual deprivation, especially early in development, substantially changes the balance of this intricate system, altering the timeline of occipital cortex development.

The local microstructure and neurochemical balance of cortex is in an immature state at birth. At eye opening, all excitatory neurons appear pyramidal, with a prominent apical dendrite and a few small basal dendrites. Excitatory synaptic activity is heavily mediated by N-methyl-D-aspartate (NMDA) receptors; over 50% of synapses lack α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors and are consequently “silent”—functionally inactive. Inhibitory neurons are similarly immature.

Excitatory pyramidal cell properties mature rapidly after eye opening, including pyramidal spine density and the conversion of pyramidal cells to their mature stellate form. Expression of NMDA receptor subunit GluN1 and the shape of the excitatory postsynaptic potential become close to adult-like within the first postnatal year.

This early synaptic activity within the excitatory system triggers the development of inhibitory pathways. Excitatory activity triggers the downregulation of polysialic acid on neural cell adhesion molecule (PSA-NCAM) expression, thereby releasing a brake on precocious plasticity. Excitatory activity also triggers the production of brain-derived neurotrophic factor (BDNF), which, in conjunction with neural activity, facilitates the maturation of both inhibitory parvalbumin (PV)-containing interneurons and silent synapses. The onset of visual experience triggers the transport of orthodenticle homeobox 2 (Otx2) from the retina to PV cells, where it promotes PV cell maturation. As a result, inhibitory PV cells increase in complexity, and the number of inhibitory connections increases, reaching adult levels by 1.5 years of age.

This changing balance between excitatory/inhibitory (E/I) responses in neural circuitry seems to be partially responsible for controlling the onset and end of the sensitive period. Early in development, excitation dominates cortical circuits, and the resulting neural activity acts as a maturational trigger. As inhibitory processes approach maturation, synaptic drive transitions from dominant excitation to dominant inhibition, helping to trigger the end of the sensitive period.

Overall, in normal development, the number of excitatory synapses continues to increase for several months postnatally, but within the occipital cortex the number of synapses asymptotes at the surprisingly early age of 8 months. However, some of these excitatory synapses lack AMPA receptors and remain functionally silent. It is not until the second postnatal year that most remaining excitatory synapses contain both AMPA and NMDA receptors, rendering them functionally active. Studies suggest a tight relationship between visual experience, the maturation of these silent synapses, and the refinement of local network connections. Between 8 months and 11 years, pruning occurs, with the loss of about 40% of synapses; synapse numbers then remain stable through most of adulthood.

This developmental timeline is fundamentally altered by deprivation. One key mechanism thought to influence the sensitive period is the changing balance between excitatory and inhibitory responses in neural circuitry. In darkness, the excitatory drive is reduced, slowing maturation. As a consequence, visual deprivation delays, prolongs, or “reawakens” developmental levels of plasticity for many processes, including BDNF expression, maturation of inhibitory PV cells, silent synapse maturation, the remodeling of layer 4 pyramidal cells into spiny neurons, neuronal tuning, and visual acuity.

Consistent with the hypothesis that visual deprivation slows or halts development, there is evidence of elevated neurochemical concentrations of excitatory choline and myo-inositol, with indications of reduced inhibitory gamma-aminobutyric acid (GABA) in early-blind and anophthalmic individuals. Elevated levels of both choline and myo-inositol are characteristic of immature cortex; concentrations of both neurochemicals, as measured by magnetic resonance spectroscopy (MRS), are high at birth and gradually decrease in the first few years of life.

Alternatively, higher levels of choline and myo-inositol might reflect long-term upregulation of cholinergic phospholipid pathways consequent on early blindness, as has been found in one animal model. Although acetylcholine is a very small component of the MRS choline peak, 1H MRS choline measurements are a surprisingly reliable surrogate marker of acetylcholine: in animal models the correlation across individuals between measured choline and acetylcholine levels is above 0.8 across multiple brain regions. In the case of myo-inositol, cholinergic activity might stimulate phosphoinositide hydrolysis and thereby raise the level of intracellular inositol. Elevated levels of myo-inositol might also reflect increased astrocyte density; dark-reared animals given access to both an enriched multisensory environment and voluntary exercise show enhanced astrocyte densities compared with control rats or rats dark reared in a nonenriched environment.

Deprived cortex is not simply immature cortex. Total creatine levels in V1 are significantly higher in anophthalmic and early-blind individuals, whereas total creatine levels are lower in infants than in adults. As a potential marker of energetic metabolism, increased levels of total creatine in anophthalmia and early blindness are consistent with evidence of upregulated metabolic processing in the occipital cortex of early-blind subjects at rest and during auditory and tactile tasks, as well as with animal studies showing increased neuronal excitability.

Gray matter and neuronal tuning

Early enucleation leads to atrophy of the lateral geniculate nucleus (LGN) in both animal models and individuals with anophthalmia, as well as to atrophy of cortical input layer 4 in animal models.

Blindness has also been shown to result in a permanent increase in cortical thickness and a decrease in occipital cortical folding in anophthalmic and early-blind individuals. Although this increase in cortical thickness has generally been attributed to a lack of experience-dependent cortical pruning after birth, other mechanisms likely also contribute. Cortical expansion relies heavily on a special type of progenitor cell that generates a fiber scaffold (at around human prenatal day 120), which promotes tangential dispersion of radially migrating neurons and thereby facilitates surface area growth within the cortical sheet. This cortical expansion has a prolonged developmental timeline; for example, in humans, the area of V1 does not reach its adult size until more than 2 years of age. Enucleation in ferrets reduces the density of these progenitor cells, and the resulting lack of tangential dispersion leads to an increase in cell density and cortical thickness, along with a reduction in cortical surface area. It is also possible that the apparent increases in cortical thickness as a result of blindness, as measured using T1-weighted MRI, represent increased T1 contrast owing to reduced myelination within the stria of Gennari (the input layer of V1) rather than a genuine increase in cortical thickness. Increases in cortical thickness and reductions in surface area are more pronounced in anophthalmic than in early-blind individuals, suggesting that both prenatal and postnatal developmental mechanisms contribute to these anatomical differences.

However, many aspects of cortical synaptic microstructure seem to be driven by molecular signaling, and develop relatively normally in the absence of visual experience. For example, the retinotopic organization of local connections across V1-V2-V3, whereby a region in V1 precisely connects with regions in V2 that represent the same retinal patch, occurs around prenatal day 200, only shortly after LGN axons innervate the cortex (prenatal day 180). Retinotopic maps in the dorsal LGN and cortex have been shown to persist when retinal waves are disrupted, albeit with reduced precision. More generally, feature-specific microcircuits in layers 2 and 3 develop even in the absence of visual experience. Cells sharing the same stimulus preferences are preferentially connected to each other even in dark-reared animals. Visual experience sculpts and refines these existing feature-based circuits to form microstructures that represent ocular dominance columns, receptive-field orientation and size, direction, and disparity tuning. Sight-recovery subjects who were blinded at an early age and had sight restored in adulthood have low acuity but retain retinotopic organization, and motion-selective responses can be measured within the visual motion area human middle temporal complex (hMT+). Thus, visual experience does not establish neural response selectivities but rather sculpts and refines neural microcircuits that are established well before the onset of visual experience.

The perceptual and neural effects of early vision loss

Early blindness has traditionally been used as a model system for examining sensory plasticity, with the assumption that the behavioral and neural differences found in early-blind individuals are primarily the consequence of sensory deprivation. However, in recent years it has become increasingly apparent that many of the effects of blindness may be due to the strikingly different perceptual and cognitive demands posed by blindness.

There are good reasons to believe that much of the plasticity that occurs as a result of early blindness reflects the development or hyperdevelopment of cognitive rather than sensory processes. For instance, the critical learning period for early blindness extends into the teenage years, when sensory processes are adult-like but cognitive processes are still developing. As described further below, blind individuals also show enhanced capacities within many tasks that are clearly cognitive. However, it remains unclear whether these enhanced capacities reflect simple excellence through expertise or uniquely heightened capacities available only after sensory loss.

There is now an increased appreciation for the capacity of the developing brain to generate novel cortically specialized areas (e.g., areas specialized for reading and numerosity) in childhood by “repurposing” evolutionarily older circuits that evolved for other functions. It seems plausible that the same mechanisms that permit this “neuronal recycling” may underlie cross-modal plasticity and compensatory hypertrophy as a result of blindness. If so, gaining traction on how the brain responds to early blindness is likely to provide important insights into the mechanisms that underlie the complex developmental plasticity that permits novel neurocognitive specializations through training.

Touch and Braille


It is often suggested that blind subjects may have enhanced tactile abilities. However, it is still unclear to what extent there are differences in tactile performance between sighted and blind subjects and, if so, whether this improved performance should be attributed to blindness per se rather than the effects of extensive practice with Braille.

In many (but not all) studies, blind individuals do not show improved tactile acuity compared with sighted individuals for Braille-like stimuli, especially after sighted individuals undergo some training. With grating orientation and detection judgments, results are similarly inconsistent, with some but not all studies finding enhanced tactile acuity in the blind.

The reasons for these inconsistencies remain unclear. As discussed by Voss et al., the age and gender of the individuals play a role. Results may also depend on how blind individuals read Braille. Not only do individuals differ widely in fluency, but they also vary in whether they read with one or multiple fingers. In individuals who read Braille with more than one finger, the cortical representations linked to the fingers used in Braille appear to merge. Although this may result in greater tactile acuity, it can also result in topographical uncertainty, which leads to worse performance when individuals are asked to discriminate which finger is being lightly stimulated. Whether sensory adaptations designed to enhance Braille reading lead to better or worse performance on a given tactile task may depend on both the particular tactile task and which fingers are tested.

Neural responses

Numerous neuroimaging studies have demonstrated that early-blind subjects consistently show large neural responses for both tactile stimuli and Braille words in V1, extrastriate, and lateral occipital cortices.

One region consistently activated by Braille is the ventral occipitotemporal cortex (vOTC)—a region selectively associated with visual reading in sighted individuals. These responses are likely to reflect feedback from higher-level language areas. The magnitude of blood oxygen level dependent (BOLD) responses to Braille is highly correlated on an individual basis with verbal memory ability, and these vOTC responses in blind, but not sighted, individuals are sensitive to grammatical structure. Transcranial magnetic stimulation (TMS) applied to occipital cortex interferes with Braille processing, but only after a delay of 50 to 80 ms, compared with shorter latencies of 20 to 40 ms when applied to somatosensory cortex.

Although these responses may reflect feedback from language areas, occipital cortex may play a functional role in Braille reading in early-blind subjects: there is the interesting case study of an early-blind woman who, following an occipital stroke, lost the ability to read Braille while retaining the ability to detect Braille letters and her other somatosensory abilities.

BOLD responses in vOTC can also be elicited in sighted individuals after extensive training in Braille, suggesting that these feedback responses may be mediated by practice with Braille, rather than by the loss of visual input per se.

Auditory processing



One of the fundamental unanswered questions in blindness is whether blind individuals “hear” better. Blind individuals are more likely to have careers that utilize their auditory abilities (e.g., piano tuner or musician, Fig. 41.4 ), which leads to a common perception that blindness results in an improved sensitivity to auditory information.

Fig. 41.4

( A ) A blind musician playing a harp, detail of a wall painting from the Tomb of Nakht, ca. 1401–1391 BC. ( B ) The Blind Harpist, John Parry (d. 1782). ( C ) Stevie Wonder. ( C , photo by John Mathew Smith, National Association of Black Owned Broadcasters, Washington, D.C., March 1994, © John Mathew Smith 2001; licensed under a Creative Commons license.)

Several studies have noted enhanced pitch and timbre perception in early-blind but not late-blind individuals, although other studies have not. It has been reported that more blind than sighted musicians (~57% vs. <20%) have absolute pitch. Similarly, early-blind individuals are also better at discriminating pitch in both speech-related sounds (vowels) and musical instruments. These enhanced auditory abilities cannot be attributed to blind individuals having more extensive musical experience; blind individuals showed improved pitch discrimination performance even when prior music training was accounted for.

Neural responses

The neural substrates underlying enhanced pitch perception following blindness remain largely unknown. Traditionally, it has been thought that the recruitment of visual occipital areas may contribute to pitch processing in blind individuals. One blind musician has been demonstrated to show occipital responses in a pitch memory task that were not observed in sighted musicians, and an exploratory study found that cortical thickness in certain occipital areas was correlated with scores on a melody task in early-blind individuals.

A number of studies have reported an attenuated response to pure tone stimuli in the temporal lobe of blind individuals compared with sighted controls. This has been interpreted as reduced participation in auditory processing, perhaps owing to increased “efficiency” of processing within the intact modality or owing to function being “usurped” by a reorganized occipital cortex. An alternative explanation is that these previous results might have reflected narrower tuning rather than reduced responsiveness—narrower tuning would be expected to result in a smaller population of neurons responding to any given narrow-band stimulus, which would reduce the measured activation in a sound versus silence comparison. Consistent with this, early-blind individuals have been shown to have narrower auditory frequency tuning, as estimated by population receptive-field mapping, within primary auditory cortex.



Early-blind individuals seem to have a similar or enhanced ability to localize single sound sources, and perform similarly to or better than sighted individuals when determining which of two sounds is farther to the right or slightly higher, especially in the periphery. A variety of other studies similarly suggest an improved ability to process auditory spatial cues, including greater sensitivity to monaural and spectral cues for spatial location and distance.

However, early-blind participants who are not echolocators have deficits in auditory bisection tasks (deciding which of two speakers an intermediate speaker is closer to), suggesting a weaker representation of allocentric auditory space. This is thought to be due to a lack of visual calibration: auditory localization and bisection performance in sighted individuals is analogously worse in the back than in the front plane—presumably because we lack eyes in the back of our heads.

This absence of visual calibration may also explain why blind individuals show superior monaural localization abilities in the horizontal plane but inferior localization abilities in the vertical plane; monaural cues can be calibrated against binaural auditory cues in the horizontal but not the vertical plane.

Interestingly, a small study ( n = 3) in individuals with central vision loss showed evidence of a selective impairment of central auditory localization abilities, once again suggesting an important role for visual calibration.

Neural responses

Occipital responses during auditory spatial tasks have been reported in early-blind individuals using PET, event-related potentials, and functional magnetic resonance imaging (fMRI). These cross-modal responses as a result of blindness appear to be right lateralized, within dorsal regions associated with visuospatial and motion processing in sighted individuals. Responses also tend to be larger in those early-blind individuals with superior auditory localization performance.

The recruitment of occipital visual areas during auditory spatial localization tasks seems to play a functional role: when transcranial magnetic stimulation is applied to the right occipital cortex, it selectively disrupts the ability of blind individuals to localize sounds in space, while not affecting performance for pitch or intensity.

One possibility is that these cortical responses reflect direct auditory cortex input. If so, we would expect these responses to be limited to peripheral stimuli, given that in primates, anatomical connections from the auditory cortex to occipital areas primarily project to the areas that respond to the visual periphery. Alternatively (although not exclusively), these responses may reflect feedback driven by connections from higher-level areas normally associated with visuospatial processing. This is consistent with the right hemisphere lateralization that has been observed for these auditory spatial responses—in sighted individuals, only lesions of the right hemisphere result in impairments in visuospatial processing.



A variety of studies have suggested recruitment of hMT+, a region strongly associated with visual motion processing in sighted individuals, for auditory motion as a result of early blindness. However, relatively few studies have examined the auditory motion perception abilities of blind individuals. Early-blind individuals have been shown to outperform sighted individuals both in a simple auditory motion discrimination task (the minimum audible movement angle) and in a more complex task requiring judging the overall direction of multiple moving sources.

Neural responses

As noted previously, cross-modal plasticity within the deprived “visual” motion area hMT+ has been extensively studied. In sight-recovery subjects and early-blind individuals hMT+ responds selectively to auditory motion but not to other complex auditory stimuli. In blind individuals, cross-modal responses in hMT+ appear to be direction selective, and are correlated with direction of perceived motion in ambiguous stimuli.

These auditory responses in hMT+ are accompanied by a loss of selectivity to auditory motion in the right planum temporale, an area associated with auditory motion processing in sighted individuals, suggesting that the recruitment of hMT+ may result from competition between cortical areas, as discussed further in the chapter.

Auditory language and cognition


Early studies on language acquisition in blind children demonstrate both delays and differences in language development compared with their sighted peers. One noticeable area of delay is in extending meaning to new referents; when a toddler first learns the word “doggie” she may think it refers only to her dog, but she will quickly learn that this word belongs to a class of objects. There is also evidence that, perhaps unsurprisingly, blind children learn concepts of time before space, whereas space before time is more usual in sighted children. Blind children may also be slower to develop the ability to maintain a shared coherent topic in conversation; one possibility is that this is due to the lack of visual cues to shared attention, another is that caregiver-initiated exchanges with blind children tend to be somewhat impoverished, focusing on labels rather than description (“two buttons” vs. “that button is bigger and it has more holes”).

These difficulties seem to represent delays rather than permanent deficits—recent studies in adults suggest similar or even enhanced abilities to process auditory language in early-blind individuals: blind individuals demonstrate superior performance in verbal fluency, sentence comprehension, and verbal working memory tasks, and even show similar levels of conceptual understanding for verbs that have “visual” meanings when compared with sighted controls.

Neural responses

An increasing number of studies report responses within occipital areas during auditory language processing following blindness. The recruitment of the occipital cortex for language seems to be associated with higher-level cognitive (rather than phonological or sensory) processing, and appears to be independent of the ability to read Braille. Activity within occipital areas in response to spoken language is greater when blind individuals perform a task that involves attending to the meaning (vs. rhyme) of words, and is modulated by the semantic content and grammatical structure of the stimuli. There is some evidence that these responses play a functional role: transcranial magnetic stimulation applied to the anatomical location of V1 disrupts performance in a verb-generation task in blind, but not in sighted, individuals, leading to more semantic errors when producing verbs related to cue nouns.

Cross-modal responses in the visual areas have also been observed for other tasks that are primarily cognitive, including verbal working memory, retrieval of episodic memory, cognitive load, and numerical processing. Some functional segregation has been observed: the areas that respond to numbers seem to be anatomically distinct from those that respond to auditory language.

It is not yet entirely clear how or why early areas in the visual processing hierarchy respond during high-level tasks involving auditory language or cognition. As described previously, anatomical evidence suggests that these responses must be primarily mediated by pre-existing white matter connections, rather than via major novel cortical connections.

Similarly to the recruitment of hMT+ for auditory motion, the recruitment of occipital cortex for auditory language/cognition seems to require blindness onset to occur during early childhood. Responses to spoken language in the occipital cortex appear to develop by age 4 years following blindness and are not observed in late-blind individuals, suggesting the existence of a sensitive period that extends beyond infancy.

What are the mechanisms that underlie cross-modal plasticity?

As described previously, blindness leads to a striking reorganization in which regions of the brain that are normally driven primarily by visual input begin to respond to auditory and tactile input and/or cognitive and language tasks. The source, role, and mechanisms underlying cross-modal plasticity following blindness are still largely mysterious.

Given the early prenatal development of white matter tracts, and the failure to find major tract differences as a result of blindness, it seems increasingly likely that white matter structure constrains functional role, rather than the other way around.

One important question is the degree to which occipital cortex responses are driven by low-level sensory signals from subcortical structures or from primary auditory or somatosensory cortex, versus by more elaborated top-down signals from higher-level cognitive areas. There is some evidence that the superior colliculus, a “visual” subcortical structure, is recruited by the auditory system in congenital and early-onset blindness, and there is some evidence of direct connections between auditory and occipital cortex. It has been suggested that cross-modal connectivity and correspondences between auditory and visual perception may be stronger in young children, and that in the absence of vision these cross-modal connections are not subject to competitive pruning over development. However, as described previously, there is also strong evidence that cross-modal responses in occipital cortex, especially in earlier visual areas such as V1, may reflect top-down language and/or cognitive processes.

According to the strong form of the “supramodal” or “metamodal” hypothesis, even traditionally unimodal brain areas carry out modality-independent tasks whose computations are not inextricably visual. Thus, the role of hMT+ is to determine the motion of objects in space, the role of the visual word form area (VWFA) is to decode structured spatial information for language, and the role of the fusiform face area is to detect the presence and identity of individuals. Importantly, it is assumed that these supramodal capacities exist in both sighted and early-blind individuals, and merely require temporary deprivation or training to be “unmasked.”

A variety of studies have demonstrated auditory and tactile sensory responses within visual areas in sighted individuals, as predicted by the metamodal hypothesis. However, across all these studies, the observed cross-modal responses might also be attributed to known cross-modal modulatory influences or to difficulties in accurately identifying particular brain areas. For example, spatial attention to one location is known to “spread” cross-modally: attention to the right visual field increases left hemisphere neural responses within both early visual and auditory areas, and vice versa. Visualization is also known to operate cross-modally, which likely explains why occipital BOLD responses to embossed Roman letters are seen in both blind and sighted individuals. Finally, in some studies the use of group-averaging techniques and/or probabilistic atlases may accidentally include adjacent regions that are genuinely multimodal. For example, whereas several studies have found auditory and tactile responses within hMT+ in sighted individuals, those studies that rigorously control for cross-modal attentional influences and define hMT+ conservatively do not.

An alternative hypothesis is that the cross-modal plasticity in hMT+ observed in early-blind individuals shares many of the same underlying mechanisms as “neuronal recycling,” whereby heavily trained skills (such as reading in sighted individuals) rely on the “recycling” of evolutionarily older circuits that originally evolved for different but related functions. Novel functions can “colonize” circuits that are sufficiently close to the required function and sufficiently plastic to reorient a significant fraction of their neural resources to this novel use. This would explain the relatively tight homologies of cross-modal function that have been found for many specialized areas in early-blind individuals, as described previously.

This novel specialization is not simply driven by competition between inputs for cortical representation, but also by competition across cortical areas for functional role. The first suggestion of corticocortical competition for functional role came from a study by Sadato et al., who found that secondary somatosensory areas were less activated by Braille reading in blind individuals than in sighted controls. More recently, recruitment of hMT+ for auditory motion processing in early-blind individuals has been shown to be accompanied by a loss of selectivity for auditory motion in the right planum temporale ( Fig. 41.5 ).

Fig. 41.5

( A ) It is hypothesized that early in development hMT+ receives both auditory and visual input. ( B ) In sighted individuals, it is believed that auditory inputs are pruned as a result of competition between auditory and visual input. ( C ) In the absence of vision, auditory inputs into hMT+ are strengthened. ( D ) However, it has also been observed that the right planum temporale (rPT) loses selectivity for auditory motion, presumably owing to competition between cortical areas for functional role.

Perhaps the most mysterious aspects of cross-modal plasticity are the auditory and tactile responses that are found in early visual areas such as V1 and V2. As described previously, an increasing number of studies suggest that regions near the occipital pole may be involved in abstract cognitive tasks, such as Braille reading, verbal working memory, language, and even mathematics.

One possibility is that the exuberant reciprocal feedback connectivity between high-level (e.g., lateral occipital cortex [LOC]) and low-level (e.g., V1) visual areas may be repurposed to perform a role analogous to that of the feedforward connections from LOC to higher-level cortex—further elaborating and abstracting information. Thus, early visual areas become involved in cognitive processes such as verbal working memory, language, and mathematics. One concern with this theory is that it requires assuming that these connections, despite being formed prenatally, are “content neutral” and can be repurposed from their original modulatory purpose (e.g., focusing selective attention and providing an “error signal” within selected subgroups of lower-level neurons) to subserve the complex computations that would be required in a “reverse hierarchy.”

An alternative explanation is that these cross-modal activations within early visual areas represent an indiscriminate amplification of both intrinsic noise within early visual areas and feedback signals from higher-level visual areas. As described previously, both animal models and measurements in humans show that visually deprived cortex undergoes a shift in the excitatory/inhibitory (E/I) balance toward greater excitation. In the absence of meaningful feedforward input, these amplified feedback signals do not play a functional role. According to this theory, the interference of repetitive transcranial magnetic stimulation (rTMS) with cognitive processes such as verbal memory might be due to the shift toward greater excitation increasing the effective amplitude of the rTMS signal, resulting in downstream interference within higher-level areas that are functionally involved in the task.

Recovery of sight after early blindness

Cases of sight recovery after extended periods of congenital blindness provide windows into the functional consequences of early visual deprivation on visual development and brain plasticity. These cases are extremely rare in high-income countries, so until the last decade, insights into the effects of early visual deprivation were primarily based on a handful of case studies. However, in the last decade a series of projects have focused on removing congenital cataracts in children in lower-income countries (Project Prakash and Project Eye Opener). These programs, because their motivations are medical as well as scientific, have resulted in a shift in scientific emphasis away from characterizing visual deficits or abnormalities, toward an emphasis on the ability of these children to make use of visual information in daily life, despite striking differences in the processing of visual information.

Unsurprisingly, given the role of visual experience in fine-tuning synaptic connectivity, early visual deprivation leads to severe losses in the ability to process fine detail (binocular amblyopia). When congenital cataracts (which often permit a small amount of form vision) are removed in children aged 8–17 years, significant recovery of function occurs in the months after surgery, but acuity remains poor, with Snellen acuity generally worse than 20/120.
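Snellen fractions such as 20/120 can be placed on a common quantitative scale for comparison across studies by converting them to logMAR (the base-10 logarithm of the minimum angle of resolution). A minimal sketch of the conversion; the function name is our own:

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g., 20/120) to logMAR:
    logMAR = log10(denominator / numerator). Normal 20/20 vision
    maps to 0.0; larger values indicate poorer acuity."""
    return math.log10(denominator / numerator)

print(round(snellen_to_logmar(20, 120), 2))  # 0.78 (post-surgical ceiling above)
print(round(snellen_to_logmar(20, 200), 2))  # 1.0 (US legal-blindness threshold)
```

On this scale, the post-surgical acuities described above sit roughly four times (0.6 log units) above the legal-blindness threshold's distance from normal vision.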

Once these acuity losses are accounted for, sight-recovery individuals seem to have little difficulty on most two-dimensional (2D) form or color tasks.

Similarly, provided acuity losses are accounted for, sight-recovery participants do not show significant impairments in motion processing, including being able to understand biological motion. Robust responses to visual motion are seen in hMT+. The retention of visual motion processing in these subjects might be considered somewhat surprising, given the evidence that after early blindness hMT+ selectively processes auditory motion, as described previously. However, data suggest that in cases of sight recovery, visual and cross-modal responses coexist within hMT+. It remains to be seen whether neurons in these areas are jointly tuned for both properties, but stronger transfer of adaptation from auditory to visual motion stimuli has been noted in sight-recovery individuals.

The most striking losses observed in sight-recovery patients seem to be related to three-dimensional (3D) visual inference, including difficulties integrating parts of an object passing behind an occluder over time, failure to perceive illusory contours in Kanizsa shapes, an inability to interpret pictorial cues or shape from shading, difficulties recognizing 3D objects from novel viewpoints, and difficulty with more complex face-identification tasks. Although performance on these tasks can improve dramatically, the bulk of the evidence suggests that even after decades of restored sight, many aspects of high-level processing remain impaired. Some (although not all) of these improvements in performance likely reflect improvement in the ability to interpret a world that is primarily 2D, rather than the development of a true 3D visual representation of the world.

Consistent with these behavioral studies, fMRI and visual evoked potential (VEP) studies suggest a dramatic and persistent disruption of neural responses within category-selective extrastriate areas.

Finally, several studies have addressed the question of whether an “amodal” conception of objects common to both senses exists a priori, or whether this concept requires experience to develop. Immediately after cataract removal, participants have difficulty matching seen and felt objects, but performance improves rapidly. In a case report of a child who recovered her sight after removal of her dense bilateral cataracts, accurate reaching and grasping behavior was observed within half an hour of her first sight, while visual recognition of previously felt objects required a few days to develop. A more quantitative study found that within months of surgery, visual and haptic integration behavior in sight-recovery individuals became comparable to that of normally sighted individuals, once visual acuity was accounted for. One interpretation of these initial difficulties in matching seen and felt objects is that there is no “immediate” transfer of amodal knowledge after sight restoration. However, another possibility is that these difficulties do not reflect a lack of amodal transfer, but rather simply reflect the known difficulties these individuals have in interpreting 3D objects visually.

Late blindness

The most dramatic behavioral adaptations to blindness are restricted to individuals who become blind early in life. Late-blind subjects tend to be much less fluent in skills such as Braille and cane use than early-blind subjects, though there are many individual exceptions. Understanding the reason for this is clinically important: for 87% of blind individuals globally, the onset of blindness occurs after the age of 14, and individuals over the age of 50 account for over 80% of the global prevalence of blindness.

One reason for this may be that adaptive skills are easier to learn at an early age, when cortex is more plastic. However, the enhanced skills of early-deprived individuals may also be partially due to behavioral and cultural factors: early-blind subjects tend to learn Braille and cane use at school, and as a result learn to rely on Braille and independent navigation skills more heavily than those deprived at a later age. Most likely, these two factors reinforce each other: early-blind subjects may learn sensory substitution skills more easily than those deprived later in life; this leads them to rely on these skills more heavily, giving them practice that further improves those skills, which in turn increases the amount of cross-modal plasticity that occurs. In those deprived later in life, this “circle of competency” may be much more difficult to establish.


Despite showing far less cross-modal plasticity (as described later), late-blind individuals, even those who lose vision late in life or experience only partial visual loss, do show enhanced auditory abilities across many tasks.

Interestingly, in auditory bisection tasks thought to rely heavily on visual feedback for calibration, late-blind individuals appear to retain the sighted advantage, suggesting that this calibration is robust to prolonged deprivation.

It is perhaps somewhat surprising that late-blind individuals show similar behavioral enhancements as early-blind individuals across many tasks, given the strong evidence for different underlying neural activation patterns. Although both early- and late-blind individuals must learn to rely on auditory and tactile information, and therefore show comparable improvements in performance (perhaps reaching the limit of the available sensory information), they may reach these performance asymptotes via different neural mechanisms. It is easy to overlook the ability of the brain to produce analogous functional behavior through divergent methods of neuroplastic adaptation. However, the presence of similar auditory behavioral skills in late- and early-blind people does raise the concern that the cross-modal responses that are observed in early-blind individuals may be less critical for the auditory enhancements in performance observed in these individuals than has previously been assumed.

Neural responses

Although both early- and late-blind individuals show novel activations in typically visual brain areas, activation patterns within these areas in late-blind individuals are not only generally weaker but also often involve activation of slightly different neural areas. Thus, although functional plasticity is maintained late into life, plasticity later in life is likely driven by different underlying mechanisms.

As described previously, even in early-blind individuals, plasticity is likely heavily constrained by white matter pathways that are established early in development. In late-blind individuals, plasticity presumably must occur within a visual architecture that is fully established.

One possibility is that late-blind cross-modal responses reflect top-down processes such as cross-modal feature-based attention, attentional filling-in, and visualization. One fundamental difference between these two groups is that late-blind individuals may rely heavily on “visualization”—translating tactile and auditory information into a remembered visual sketchpad, whereas it is difficult to even conceive of “visualization” in the context of congenital blindness.

Sensory substitution

Blind individuals must adapt and thrive in a world designed primarily by and for sighted individuals. To meet this challenge, blind and visually impaired individuals have access to a range of technological and behavioral tools, known collectively as sensory substitution technologies.

The range of skills and technologies that can be described as sensory substitution is immensely broad, ranging from learning to estimate when a drinking glass is full from the sound of the liquid and the weight of the glass, to smartphone apps used for navigation, to wearable prostheses that translate visual information into verbal descriptions of the world.

Until recently, there were three sensory substitution technologies that were standardly used by blind individuals: Braille (invented in the early 1800s) for reading/writing, and the white cane and guide dog (both established in the early 1900s) for navigation. However, the last few decades have seen a rapid transformation in the field of sensory substitution that has impacted both literacy and navigation.

After decades of being the standard in blind education, Braille is being swiftly replaced by text-to-speech technologies. Over the last few decades, as most written materials have become available in digital format and dictation software has improved, there has been a precipitous drop in the use of Braille. Digital technologies are less expensive, more accessible (it is much easier to download a book than to send it to be transcribed), and support faster “reading” rates. Indeed, there is currently a heated debate in the blind community about “the end of literacy” that bibliophile sighted individuals might consider an ominous foreshadowing.

As far as navigation is concerned, the revolution is just beginning. Until recently, most attempts to improve on the guide dog and white cane focused on converting relatively low-level visual information into auditory or tactile cues. Examples include transducing a camera’s image onto a 10 × 10 grid of electrodes placed on the tongue, an echolocation device that transduces ultrasonic echoes carrying information about object distance into auditory signals, and the use of auditory frequencies and timbres over time to represent 2D spatial patterns. Although participants could be successfully trained to high performance levels using these devices in the laboratory, to date none has been widely adopted by the blind community. It is not clear whether this is due to the amount of training needed to use these devices successfully, the fact that using them necessarily interferes with natural auditory input, or the impossibility of transmitting enough low-level visual information via audition or touch to be useful outside the laboratory, in the cluttered visual world.
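The last of these schemes, mapping 2D spatial patterns to auditory frequencies over time, can be sketched concretely. The following is a minimal, illustrative sketch of a column-scan sonifier in that spirit; the frequency range, burst duration, and sample rate are assumptions chosen for illustration, not the parameters of any actual device:

```python
import math

def sonify_column(column, f_low=500.0, f_high=5000.0,
                  duration=0.064, sample_rate=8000):
    """Turn one image column (brightness values in [0, 1], top row first)
    into a short tone burst: vertical position maps to pitch (top = high)
    and brightness maps to loudness."""
    n_rows = len(column)
    n_samples = round(duration * sample_rate)
    samples = []
    for i in range(n_samples):
        t = i / sample_rate
        s = 0.0
        for row, brightness in enumerate(column):
            # Log-spaced frequency ladder, highest frequency for the top row.
            frac = 1.0 - row / max(n_rows - 1, 1)
            freq = f_low * (f_high / f_low) ** frac
            s += brightness * math.sin(2 * math.pi * freq * t)
        samples.append(s / n_rows)  # normalize so |sample| <= 1
    return samples

def sonify_image(image):
    """Scan columns left to right, one tone burst per column."""
    audio = []
    for col in range(len(image[0])):
        audio.extend(sonify_column([row[col] for row in image]))
    return audio

# A 4 x 4 image containing a bright descending diagonal: the output
# sweeps from high to low pitch across four successive bursts.
image = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
audio = sonify_image(image)
```

The left-to-right scan means the listener receives the image as a brief spectrogram-like sweep, which conveys coarse layout but illustrates the bandwidth problem noted above: even a tiny image requires tens of milliseconds of dedicated audio per column.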

More recently, huge advances in artificial intelligence (AI) have permitted a radical new approach. First, the improved ability of AI to recognize faces, objects, and text means that new apps for the blind no longer focus on presenting low-level sensory information; it is now possible to provide detailed auditory cues: “Pyrenees shepherd,” “Fred,” “Waitrose cocktail gherkins.” Second, vast improvements in GPS mapping and navigation technologies and databases provide an infrastructure for providing wayfinding information for blind individuals that was not previously available.

Finally, there is a reawakened interest in a very low-tech solution: human echolocation. Using either the sound of a clicking tongue or a simple handheld clicker, practitioners are able to navigate complex, novel environments, discern the layout of multiple objects, and even identify simple objects. Echolocation can be used to judge the distance of objects and estimate their shape and size, and it is sensitive to the unique acoustic absorption characteristics of different objects and materials in the environment. Although scientists have been aware of the potential of what was termed “facial vision” since the mid-1700s, blind children were often discouraged from “clicking,” which was misinterpreted as a repetitive behavior or a “blindism.” Recent activism from within the blind community has led to renewed scientific interest and an increased availability of training in echolocation ( Fig. 41.6 ).
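The distance judgments described here rest on simple acoustics: a click's echo returns after the sound travels to the surface and back, so the distance is half the round trip, d = v·t/2. A minimal sketch; the speed-of-sound value assumes dry air at roughly 20 °C:

```python
SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 degrees C (an assumption)

def echo_distance(delay_s: float, speed: float = SPEED_OF_SOUND) -> float:
    """Distance to a reflecting surface given the click-to-echo delay.
    The sound travels out and back, hence the division by two."""
    return speed * delay_s / 2.0

# An echo arriving 20 ms after the click implies a surface ~3.4 m away.
print(echo_distance(0.020))
```

The short delays involved (a wall 1 m away returns its echo in under 6 ms) illustrate why echolocation draws on fine temporal and spectral discrimination rather than conscious timing.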

Jun 29, 2024 | Posted by in OPHTHALMOLOGY | Comments Off on The Effects of Visual Deprivation After Infancy
