18
Advantages of Binaural Hearing
Camille C. Dunn, William Yost, William G. Noble, Richard S. Tyler, and Shelley A. Witt
The human auditory system possesses a remarkable ability to hear sounds with two ears and to combine the two signals into a single percept processed by the brain. This is called binaural processing. This chapter discusses the cues that our auditory system uses to recognize sounds and to separate them into different sound sources, and how listeners with hearing aids and cochlear implants use these cues to enable binaural hearing.
It should be noted that most of this discussion of how the auditory system processes sounds is based on findings obtained in controlled laboratory environments. Where possible, therefore, we generalize the discussion to how the auditory system may work in noncontrolled environments (e.g., a local restaurant).
♦ Advantages of Binaural Hearing in Normal Hearing Listeners
Through studies of normal-hearing listeners, we have come to better understand the advantages of hearing with two ears. The ability to determine the location of a sound source is an important function performed by the auditory system. It allows one to know where objects in our world are located. The fact that the auditory system can locate the position of various sound sources aids a listener in sorting out different sound sources in a complex acoustic environment. That is, binaural processing may aid in detecting and attending to target sound sources, such as speech, in a background of competing sound sources (noise).
Because sound does not have a physical dimension in space, locating a sound source requires some form of neural computation by the auditory system. Binaural processing is one such computation that allows the auditory system to determine the location of sound in the horizontal or azimuth plane (the left–right dimension) (Fig. 18–1). This computation is based on the interaction of sound with the body (e.g., the head) of the listener or objects in the listener’s environment. A sound source can be localized in space based on the characteristics of the sound produced by the source. A sound source on one side of a listener arrives at the ear closer to the source before it arrives at the ear farther from the source. This difference in arrival time is called the interaural time difference (ITD). The level of the sound at the ear nearer the source is also greater than that at the ear farther from the source, generating an interaural level difference (ILD). The binaural auditory system computes these two interaural differences (ITD and ILD) to determine the azimuthal location of the sound source. For example, if the ITD and the ILD are zero, then the source is directly in front of the listener, or at some point in the plane bisecting the body vertically. If the ITD and ILD are large, then the sound source’s location is toward one ear or the other.
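The geometry relating source azimuth to ITD can be sketched numerically with the classic Woodworth spherical-head approximation, ITD = (r/c)(θ + sin θ). This is an illustrative model rather than part of the chapter; the head radius and speed of sound below are assumed typical values.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, assumed value at roughly 20 degrees C
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def woodworth_itd(azimuth_deg: float) -> float:
    """Approximate interaural time difference (in seconds) for a distant
    source at the given azimuth, using the Woodworth spherical-head
    formula: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly ahead (0 degrees) produces no ITD; a source opposite
# one ear (90 degrees) produces the maximum ITD, on the order of the
# ~0.6-millisecond interaural travel time discussed later in the chapter.
print(f"{woodworth_itd(0) * 1e6:.0f} us")    # 0 us
print(f"{woodworth_itd(90) * 1e6:.0f} us")   # 656 us
```

The monotonic growth of ITD from 0 toward its maximum as the source moves from straight ahead toward one side is what lets the binaural system map ITD onto azimuth.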
Figure 18–1 A sound source located in the azimuth plane produces interaural differences of time and level at the listener’s ears.
As will be explained later in the chapter, the ITD is a useful binaural cue only at low frequencies. Thus, above about 1000 to 2000 Hz, the cue used for azimuthal sound localization is the ILD. Between about 700 and 1200 Hz, both the ILD and the ITD provide useful binaural information about azimuthal sound source location, and below about 800 Hz, only the ITD can be used to locate a sound source. The fact that ILDs provide azimuthal information for high-frequency sounds and ITDs provide azimuthal information for low-frequency sounds is referred to as the duplex theory of sound localization (Stevens and Newman, 1936).
Head Shadow Effect
The relative amount of information provided by an ITD or an ILD concerning the azimuthal location of a sound source depends on the frequency of the sound. The ILD arises primarily from the interaction of sound with the head, which casts an “acoustic shadow” at the ear farther from the sound source. That is, the sound arriving at the ear farther from the source is attenuated relative to the sound arriving at the ear nearer the source. The amount of attenuation caused by the head shadow depends on the sound’s wavelength. In general, a sound shadow is produced when the wavelength of the sound is shorter than the size of the object (e.g., the head) casting the shadow, because short wavelengths do not diffract efficiently around the object. The smaller the sound’s wavelength relative to the size of the object, the greater the shadow and the larger the ILD. Because wavelength is inversely proportional to frequency, high frequencies have short wavelengths and produce large ILDs. The average human head produces about a 2-dB ILD at 1000 Hz, and as much as a 20-dB ILD at 6300 Hz (Kuhn, 1987). Below about 500 Hz, the ILD is 1 dB or less and is not detectable by the auditory system.
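The wavelength arithmetic behind the head shadow can be checked directly from λ = c/f, comparing each wavelength against an assumed average head diameter (the specific values below are illustrative assumptions, not figures from the chapter):

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed
HEAD_DIAMETER = 0.175   # m, an assumed average head diameter

def wavelength(frequency_hz: float) -> float:
    """Wavelength in meters: lambda = c / f."""
    return SPEED_OF_SOUND / frequency_hz

# Shadowing grows gradually as the wavelength approaches and then falls
# below the head size, which is why the ILD is ~2 dB at 1000 Hz but as
# much as ~20 dB at 6300 Hz.
for f in (500, 1000, 6300):
    lam = wavelength(f)
    print(f"{f} Hz: wavelength {lam:.3f} m, "
          f"shorter than head: {lam < HEAD_DIAMETER}")
```

At 500 Hz the wavelength (about 0.69 m) is several times the head diameter, so essentially no shadow forms and the ILD is negligible; only at 6300 Hz does the wavelength drop well below the head size.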
The head shadow provides a potentially useful advantage for detecting a target signal such as speech that is spatially separated from competing sound sources (noise). For example, if the competing sound source is on the side of the head opposite the target signal, then the noise will be attenuated relative to the level of the signal due to the head shadow. The ability to use the higher level of the sound at the ear closer to the sound source to improve signal detection is not a direct function of binaural processing, but is the physical consequence of sound diffraction around an object such as the human head. This chapter discusses how binaural processing of the sounds from spatially separated signal and competing sources can aid in the detection and intelligibility of a signal relative to competing sound sources.
Binaural Squelch
In addition to the ability to use the neurally computed ITD and ILD to localize a sound source, ITDs and ILDs can be used to “squelch” information provided by a spatially separated competing sound source to aid in the detection of a target signal. The ability to detect a target sound at one location in the presence of sounds at other locations is also referred to as the “cocktail party” effect, as explained below.
The ITD is a cue that is limited to sounds with frequencies below about 800 Hz. A sound with a frequency of 800 Hz has a period of 1.25 milliseconds and thus a half period of 0.625 milliseconds, which is about the time it takes sound to travel from one side of the head to the other. For sounds with frequencies below 800 Hz (half periods longer than 0.625 milliseconds) located opposite one ear, each peak of the sound wave will always arrive at the ear closer to the sound source before it reaches the ear farther from it. Binaural processing of this interaural time difference therefore correctly indicates that the sound source is opposite the nearer ear. If the half period of the sound is shorter than about 0.625 milliseconds (frequency higher than 800 Hz), then for all peaks of the waveform except the first, a peak arriving at the ear farther from the sound source may precede the corresponding peak at the nearer ear, producing a confusing pattern of interaural time differences. Thus, only low-frequency sounds produce an unambiguous ITD for indicating the location of the sound source.
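The period arithmetic above can be verified with a short sketch. The 0.625-millisecond maximum interaural travel time is the approximate figure given in the text; the test frequencies are arbitrary:

```python
MAX_ITD = 0.000625  # s, approx. time for sound to travel around the head

def half_period(frequency_hz: float) -> float:
    """Half the period of a pure tone, in seconds."""
    return 0.5 / frequency_hz

# A tone's ITD becomes ambiguous once its half period is shorter than
# the maximum interaural travel time: a peak at the far ear can then be
# mistaken for the partner of a *different* peak at the near ear.
# At exactly 800 Hz the half period equals MAX_ITD, the boundary case.
for f in (400, 800, 1600):
    hp = half_period(f)
    print(f"{f} Hz: half period {hp * 1000:.3f} ms, "
          f"ITD ambiguous: {hp < MAX_ITD}")
```

Running this confirms that a 400-Hz tone (half period 1.25 ms) yields an unambiguous ITD, while a 1600-Hz tone (half period 0.3125 ms) does not.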
Another kind of ambiguity associated with ILDs and ITDs is the cone of confusion. For a stationary listener, a sound at, for example, 45 degrees to the right and in front produces interaural differences nearly identical with those for a sound 45 degrees to the right and behind. All sources on the surface of an imaginary cone, having a slope of 45 degrees, an apex at the center of the head, and an axis that coincides with the line through the two ears (the interaural axis), provide the same, or nearly the same, interaural differences. This geometric principle holds for any location displaced from the body’s vertical midline. Various other cues, including monaural outer ear spectral features (Shaw, 1982), interaural disparities in such spectra (Searle et al, 1975), and, more robustly, even very slight movement of the listener’s head (Perrett & Noble, 1997), serve to overcome this ambiguity.
Binaural Summation
Sounds that are presented to both ears are up to 3 dB more audible than sounds presented to only one ear, assuming the level of the sound at the two ears is about the same (Wegel and Lane, 1924). In many situations, the perceived loudness of sounds presented to both ears is also greater than that of sounds presented to only one ear. This binaural loudness advantage ranges from no difference up to as much as 10 dB (equivalent to a doubling of loudness), depending on the stimulus conditions (Fletcher and Munson, 1933). Thus, sounds presented bilaterally appear to add together in some fashion; this is referred to as binaural summation. But as will be explained below regarding the masking-level difference (MLD), binaural presentation of a target signal plus a competing sound presented identically to both ears does not yield lower masked thresholds than conditions in which the signal and masker are presented to only one ear.
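The decibel figures quoted above follow from the definition of the decibel: doubling the effective acoustic intensity corresponds to about a 3-dB increase, while a tenfold intensity increase (10 dB) is conventionally taken to be heard as a doubling of loudness. A quick check of the arithmetic:

```python
import math

def db_change(intensity_ratio: float) -> float:
    """Level change in dB for a given ratio of acoustic intensities."""
    return 10 * math.log10(intensity_ratio)

# Summing identical inputs from two ears at most doubles the effective
# intensity, hence the up-to-3-dB audibility advantage.
print(f"{db_change(2):.2f} dB")   # 3.01 dB

# A tenfold intensity ratio gives the 10-dB figure associated with a
# perceived doubling of loudness.
print(f"{db_change(10):.2f} dB")  # 10.00 dB
```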
Localization
In our natural listening environments, listeners perceive sounds emanating from all directions. In a clinical setting, sounds are generally presented to listeners from loudspeakers or through headphones. Sounds presented from loudspeakers, like sounds produced by natural sources, produce ITDs and ILDs based on the interactions between the sound and the head, as explained earlier. Sounds presented over headphones can be manipulated so that the sound arriving at one headphone is earlier and more intense than that arriving at the other, creating ITDs and ILDs. For many stimulus conditions, the perception of the sound’s location differs depending on whether it was produced by a loudspeaker (or a natural source) or delivered over headphones. Thus, the term localization is used to refer to sounds produced by loudspeakers or natural sound sources, and the term lateralization is used to refer to conditions in which sounds are delivered to the ears via headphones.
Azimuthal sound localization for listeners with normal hearing is best at frequencies below about 800 Hz and above about 2000 Hz, and worst between 800 and 2000 Hz. These differences are attributed to the duplex theory of sound localization, as explained earlier. The threshold ability to discriminate a change in the location of a sound source is referred to as the minimal audible angle (MAA) (Mills, 1972). The MAA is smallest (~1 degree) when the sound source is directly in front of the listener and poorest when the sound source is opposite one ear or the other [the MAA is about 5 to 9 degrees (Mills, 1972)]. Because the MAA is best for sounds directly in front of a listener, it is often crucial to control for head movements in studying binaural processing, so that listeners do not move their heads to take advantage of the small MAA for sounds located directly in front. The average error in azimuthal location of a broadband sound source (e.g., a broadband noise or an acoustic transient) for listeners with normal hearing is about 5 degrees (Wightman and Kistler, 1989). Sounds, especially narrow-band sounds such as tones, produced in reverberant spaces are more difficult to localize accurately than sounds produced in echo-free spaces (Hartmann, 1983).
Lateralization
When sounds are presented over headphones, the ITD and ILD values can be carefully controlled and manipulated. Such headphone-produced sounds are generally perceived as being lateralized inside the head rather than out in actual space. Changes in ITD or ILD produce the perception of sounds located at different places along an imaginary line running inside the head from one ear to the other. Large ITDs and ILDs place the sound’s perceived lateral location toward one or the other ear. Thus, the relationship of ITD and ILD to perceived location is much the same for actual sound sources as for sounds lateralized over headphones.
Listeners with normal hearing can discriminate a difference of about 10 microseconds of ITD and about 0.5 to 1.0 dB of ILD, although ITD discrimination is frequency dependent whereas ILD discrimination is relatively frequency independent (Yost and Dye, 1991). The just noticeable difference in ITD is best at frequencies below 500 Hz and above 750 Hz, worst around 600 Hz, and cannot be measured above about 1200 Hz for the reasons stated above. The fact that binaural processing of ITD cues operates on a scale of tens of microseconds suggests that the binaural system is extremely sensitive to the temporal fine structure of the sound waveform arriving at the two ears.
“Cocktail Party” Problem
Colin Cherry (1953) argued that one of the cues that listeners could use to attend to one sound source (e.g., one person speaking) at a noisy cocktail party is the fact that different sound source positions are perceived as being at different locations. Thus, sound localization could be used to solve the “cocktail party” problem. Today the cocktail party problem is often referred to as auditory scene analysis (Bregman, 1990), and sound source location is one cue that can be used to analyze an auditory scene.
The MLD produced when a masker is generated with a different set of ITDs or ILDs than the signal is often cited as evidence for the use of binaural processing as a way to sort out an auditory scene or to solve the cocktail party problem (Green and Yost, 1975). The MLD is usually obtained in headphone conditions in which a signal is added to a masker, such that the signal is presented with one set of ITDs and/or ILDs and the masker with a different set. For instance, the masker (M) may be presented identically to both ears (M0, ITD = 0 and ILD = 0) and the signal (S) presented to one ear 180 degrees out of phase relative to the other ear (Sπ). This M0Sπ condition yields a masked threshold for detecting the signal that can be 15 dB lower than the masked threshold in the M0S0 condition, in which the masker and signal are both presented with no interaural differences. In addition, the M0S0 masked thresholds are the same as those obtained when the masker and signal are presented monaurally (m) to the same ear (MmSm). Because the Sπ signal is added to the M0 masker, nonzero ILDs and ITDs are produced and the signal-plus-masker is perceived off midline, whereas the M0S0 stimulus is perceived at midline. Presumably, binaural processing of these interaural differences yields the increased detectability of the signal in the M0Sπ condition (Durlach and Colburn, 1978).
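These stimulus configurations can be sketched in code. The construction below is a hypothetical illustration of how the diotic (M0S0) and phase-inverted-signal (M0Sπ, written "Spi" here) stimuli differ at the two ears, not an implementation of any particular MLD experiment; the frequencies, duration, and sampling rate are arbitrary:

```python
import math

def tone(freq, dur=0.01, rate=16000, invert=False):
    """A pure tone; invert=True flips its phase by 180 degrees."""
    sign = -1.0 if invert else 1.0
    return [sign * math.sin(2 * math.pi * freq * n / rate)
            for n in range(int(dur * rate))]

def mix(a, b):
    """Sample-by-sample sum of two equal-length waveforms."""
    return [x + y for x, y in zip(a, b)]

# Hypothetical narrow-band "masker" and 500-Hz "signal".
masker = tone(450)
signal = tone(500)

# M0S0: identical signal-plus-masker at both ears (no interaural cues).
left_o, right_o = mix(masker, signal), mix(masker, signal)

# M0Spi: same masker at both ears, signal phase-inverted at one ear.
# The summed waveforms now differ between the ears, creating the
# nonzero ITDs and ILDs that binaural processing can exploit.
left_pi = mix(masker, signal)
right_pi = mix(masker, tone(500, invert=True))

print(left_o == right_o)     # True: no interaural difference
print(left_pi == right_pi)   # False: interaural differences present
```

The point of the sketch is only that inverting the signal at one ear, while leaving the masker diotic, makes the composite waveforms at the two ears differ; it is those interaural differences that presumably underlie the lower masked threshold.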
A similar improvement in detection occurs when a signal is presented from a loudspeaker at one location and a masker from a loudspeaker located at a different location. The signal can be as much as 10 dB more detectable when the signal and masker sound sources are separated in space (Bronkhorst, 2000), although the signal detection advantage of spatial separation is significantly reduced in reverberant spaces (Plomp and Mimpen, 1981). Spatially separating a signal source from an interfering masker source can also improve the intelligibility of the signal (Bronkhorst, 2000). The bulk of the effect is likely due to differences in head shadow (Bronkhorst & Plomp, 1989).
Precedence Effect
In reflective spaces such as rooms (and even outdoors, where the ground can be a significant reflective surface), each reflected sound wave reaches the listener from a different direction, which could in principle create the perception of a sound source at the location of each reflection. However, despite the many reflections that occur in most acoustic environments, these reflections, although close in level and in time to the sound arriving directly from the originating source, do not significantly alter the perceptual fidelity or the perceived spatial location of the originating sound source. The sound from the originating source always arrives at the ears of a listener before any reflection, because a reflected waveform has to travel a longer path. Thus, the fact that such reflections do not appear to alter the sound localization of the originating source implies that the first-arriving sound takes “precedence” over that arriving from any reflections (Blauert, 1997).
The effects of precedence (Litovsky et al, 1999) include fusion (in most reflective environments, a single sound is perceived rather than a sound with echoes), localization dominance (the location of the perceived sound source is at or near that of the originating sound source, not at the location of any reflection), and discrimination suppression (the ability of the auditory system to discriminate spatial changes in the reflections is suppressed relative to that for the originating source). These effects of precedence occur when the reflections are about 5 to 50 milliseconds later than the originating sound depending on the reverberant conditions and the type of originating sound source. It is presumed that binaural processing plays a role in suppressing the spatial information coming from the later arriving reflections, and is responsible for the effects of precedence.
Summary
Hearing with two normally functioning ears allows listeners to hear more effectively in noisy environments and to determine the direction of sounds. The normally functioning auditory system can detect small differences in the timing and level of sounds at the two ears. These ILD and ITD cues can be used to squelch competing sounds, to attend to the ear with the better signal-to-noise ratio, to determine the direction of a sound, and to analyze an auditory scene. Listeners with normal hearing in only one ear (and some hearing in the other ear) can suffer significantly in their ability to localize sound sources in the azimuthal plane, especially for low-frequency sounds, where localization depends on binaural processing of ITD cues. Such listeners may gain some ability to process sounds in noisy environments, and some sound localization ability, by using their “better ear” to take advantage of the head shadow. However, they are unlikely to demonstrate a large benefit from true binaural processing, such as binaural squelch.
♦ Advantages of Binaural Hearing in Listeners with Hearing Aids
A common assumption among clinicians and researchers involved with hearing aid fittings is that two hearing aids must offer an advantage over one hearing aid, the one exception being unilateral hearing loss. Speech hearing in noise and localization are two areas in which advantages of wearing two hearing aids are expected.
Speech Hearing in Noise
The advantage of better speech understanding in the presence of noise for bilateral hearing aid fittings derives from the same principles explained earlier, especially the head shadow and squelch effects. [Following a point argued previously (Noble & Byrne, 1991), the terms unilateral and bilateral are used to refer to fitting profiles, thereby making no assumptions about whether wearing one hearing aid precludes binaural hearing or wearing two confers it, neither of which will automatically be the case.] It is relatively straightforward to provide conditions in a laboratory that demonstrate the advantage of a bilateral hearing aid fit. If someone with hearing loss in both ears is presented with a target signal such as speech on one side of the head and a competing signal such as noise on the other side, a single hearing aid on the side of the noise does the listener a disservice. The head shadow effect reduces the overall level of the speech, whereas the hearing aid amplifies the unshadowed portion of the noise. With hearing aids in both ears, by contrast, the shadowing effect is at least equalized and can provide an advantage depending on the relative location of the target sound versus the competing signal. Furthermore, depending on the profile of the hearing loss, there may be the additional benefit of the interaural comparison that characterizes the phenomenon of binaural squelch.
The typical profile of a sensorineural hearing loss shows higher hearing threshold levels at higher frequencies than at lower frequencies. As discussed earlier in this chapter, the head shadow effect is greater for higher frequency sounds. Because of this, the differences in signal-to-noise ratio between shadowed and unshadowed sides are less noticeable, when unaided, for individuals with typical high-frequency sensorineural hearing loss than for a normally hearing listener. Binaural squelch, by contrast, is most pronounced for low-frequency sounds, given that low-frequency features of the waveforms at the two ears are more readily compared at the level of the central nervous system. Thus, for people with the typical sensorineural profile, it may be less likely that bilateral fitting of hearing aids will promote an advantage over a unilateral fitting in terms of improving central processing (Dillon, 2001). It may be assumed, however, that bilateral fitting will improve the audibility of the higher frequency components of signals sufficiently to restore at least some part of the head shadow advantage. Dillon (2001) argues that “bilateral advantage arising from head diffraction will be least for those whose high-frequency hearing loss is only mild and for those whose high-frequency hearing loss is so severe that the high frequencies make no contribution to intelligibility” (p. 381). By implication, for the great majority of listeners with sensorineural hearing losses, the fitting of bilateral hearing aids will afford this advantage.
In a laboratory-based study (Köbler & Rosenhall, 2002) involving patients with the typical sensorineural profile, a small advantage in speech intelligibility for bilateral fitting was observed. A Swedish report (Arlinger et al, 2003) inquiring into what is known about the effectiveness of hearing aids included comprehensive reviews of literature worldwide. It stated that “laboratory studies [show] two hearing aids can provide better speech comprehension than a single hearing aid.” But this report goes on immediately to say, “there is no support from controlled clinical trials to show whether two hearing aids are superior to one” (p. 9). Thus, although laboratory conditions indicate that bilateral fittings should be better than a unilateral fitting in the domain of speech hearing in noise, no good field trial evidence exists to support the proposal. This is a serious gap in our knowledge of the benefits of bilateral hearing aid fittings. It needs to be determined whether bilateral hearing aid fittings actually pay off in making a difference in people’s lives. We summarize, at the end of this section, the most recent outcome of an ongoing project, which indicates that researchers and clinicians may not have been looking in the right places to uncover the real benefits of fitting people with two hearing aids.
Localization
The picture is clearer for the domain of localization with regard to the value of bilateral hearing aid fittings. The term localization is understood to refer particularly to the directional component (especially in the horizontal plane) of a more general spatial hearing function (Blauert, 1983), a function that extends to discrimination of distance and movement. The dependence of the directional component of spatial hearing on an effective binaural system was spelled out earlier. The evidence in favor of bilateral fittings for directional hearing is fairly well established, although the research findings indicate that the benefit may not be universal. Much of the research on hearing aids and sound localization is reviewed in Byrne and Noble (1998). Especially for patients with more severe loss, two hearing aids offer unquestioned advantage over one. This has been shown in performance testing of experienced unilateral and bilateral users (Byrne et al, 1992). The factor of experience is critical, however, as there is abundant evidence that listeners adapt to changes in interaural cues, brought about by changes in such things as hearing aid profile (Byrne & Dirks, 1996). The implication is that initial performance with two hearing aids, for someone who has been used to wearing only one, might be worse, but that performance improves as the listener adapts to the new profile.
For patients with milder hearing loss, the performance picture is less clearly in favor of bilateral hearing aid fittings. Byrne et al. (1992) found that for average losses (over 0.5, 1, 2, and 4 kHz) up to 40 dB, no performance difference in horizontal plane localization was observed between experienced listeners aided unilaterally versus those aided bilaterally. In this study, signals were presented at the listener’s most comfortable level. It is understood, however, that if the level of a target signal were to be reduced to the point where it was inaudible in the unaided ear, such similarity of performance between unilaterally and bilaterally aided listeners would no longer be observed. Nonetheless, in a study based on self-assessment of spatial hearing (Noble et al, 1995), two groups of experienced unilateral and bilateral users, closely matched for degree of hearing loss, rated their aided ability as equally improved over their unaided. The average losses of the two groups were 45 dB. If the performance data noted above are reliable, rated ability aided should be better for a bilateral than a unilateral sample with more severe levels of hearing loss.
One clinical group that shows particularly strong localization benefit following bilateral hearing aid fitting is those with predominantly bilateral conductive hearing loss. Unaided, such patients reveal severe performance decrements (Häusler et al, 1983; Noble et al, 1994). The likeliest explanation for this is loss of acoustic isolation between the cochleae, taking the form of a greater proportion of sound energy reaching both cochleae virtually simultaneously, that is, via the skull, hence negating interaural differences (Zurek, 1986). Aided bilaterally, many with this form of hearing disorder show marked improvement in localization (Byrne et al, 1995), presumably because there is an increase in air conduction energy, hence some recovery of cochlear isolation. It follows from the acoustics of bilateral conductive hearing loss that only bilateral aiding can have this beneficial effect.
New Findings About Bilateral Hearing Aid Fittings
A project dedicated to examining a broad range of binaural hearing functions has begun with the development of a self-assessment scale, derived in part from previous work reported on by Noble et al (1995). The resulting questionnaire is titled the Speech, Spatial and Qualities of Hearing Scale (SSQ) (Gatehouse & Noble, 2004). The SSQ covers an extensive range of speech hearing contexts. Some of the content covered includes circumstances of one-to-one and group conversation in quiet and noise; a range of contexts calling for divided and rapidly switching attention; various aspects of the distance and movement as well as directional components of spatial hearing; and a set of questions addressing segregation, clarity, naturalness, and recognition of everyday sounds, together with items on listening effort.
The SSQ includes questions addressing speech hearing contexts that are challenging even in the case of normal hearing, for example, being able to follow one speech stream while at the same time ignoring another, following two speech streams simultaneously, and following conversation in a group where speakers switch rapidly. Adding movement discrimination to the coverage of spatial hearing is also a new inclusion. Each of these features is nonetheless argued to be common in effective functioning in the everyday world, a point we will return to.
The SSQ was applied as an interview to a large sample of first-time clients at a hearing rehabilitation clinic. Independently (i.e., in advance of the initial appointment at the clinic, and with no prior knowledge they would be interviewed), these clients completed a short questionnaire inquiring about handicaps (i.e., social limitations and emotional problems stemming directly from their hearing impairment). The aim was to find out which elements of the range of disabilities in the speech, spatial, and quality domains would best predict the social/emotional handicaps that stem from them. It emerged that “identification, attention and effort problems, as well as spatial hearing problems, feature prominently in the disability-handicap relationship, along with certain features of speech hearing. The results implicate aspects of temporal and spatial dynamics of hearing disability in the experience of handicap” (Gatehouse & Noble, 2004, p. 85).
A further examination of these data (Noble & Gatehouse, 2004a), separately for patients with interaurally more asymmetric hearing loss, showed that the dynamic spatial and more demanding attentional aspects of hearing were particularly associated with handicap in that particular group. By contrast, in patients with more symmetrical hearing loss, all aspects of disability were correlated with the handicap experience. This finding gave a strong indication about what might be expected from unilateral versus bilateral hearing aid fittings.
In a subsequent application, the SSQ was administered to patients newly fitted with one versus two hearing aids, after 6 months of use with their device(s) (Noble & Gatehouse, 2004b). It emerged that, relative to unaided listening, two aids do not seem to provide any further noticeable benefit over one hearing aid in the traditional speech hearing domains, namely, one-on-one and group conversation in quiet and in noise. Rather, it is in the more substantially demanding contexts of following two speech streams, suppressing one to follow another, or rapidly switching attention among speakers that bilateral fitting demonstrates leverage in reducing disability. In the spatial domain, although the directional component shows a moderate benefit from bilateral over unilateral fitting, it is with the more dynamic elements of distance and movement discrimination that two aids demonstrate more evident benefit. Finally, bilateral fitting (not unexpectedly) confers a notable benefit over unilateral fitting in permitting greater ease of listening. This is to be expected given that the more demanding contexts are less disabling with two aids; hence the effort needed to cope in those contexts is less extreme.
It seems fair to say that being able to better handle difficult and challenging communication tasks, and being able to better orient and navigate in the dynamic audible world, enhances a sense of competency and reduces the sense of being socially restricted and emotionally distressed. It still remains to develop performance measures that will allow more precise investigation of the domains of hearing served by binaural function. In the meantime, it is clear that new arenas have been uncovered in which bilateral hearing aid fittings confer their real benefit, and these merit further study and analysis.
Summary
Laboratory studies and a recent subjective questionnaire have demonstrated the benefits of wearing bilateral hearing aids over wearing only one hearing aid. The benefits include improved dynamic localization abilities, particularly distance and movement discrimination, and better management of demanding conversational situations such as following two speech streams, suppressing one speech stream to follow another, or rapidly switching attention among speakers. It may be predicted that someone with hearing losses (HLs) in both ears in excess of about 40 dB HL over 0.5 to 4 kHz will be more likely to experience these sorts of benefits when using two hearing aids. In less demanding contexts, however, it may be harder to distinguish the benefit of two over one aid. That said, the real-world dynamics and demanding communicative contexts confront people with hearing loss no less than anyone else.
♦ Advantages of Binaural Hearing in Listeners with Cochlear Implants
Individuals who utilize a cochlear implant fall into many different patient profiles. For the purposes of this chapter, we focus on people implanted with a standard long array who fall into the following three categories: patients implanted unilaterally who do not wear amplification in the contralateral nonimplanted ear [cochlear implant (CI) only]; patients implanted unilaterally who wear amplification in the contralateral nonimplanted ear [CI and a hearing aid (HA), denoted as CI + HA]; and patients implanted bilaterally who wear a cochlear implant in each ear (CI + CI).
The CI-only patients do not receive most of the advantages of hearing with two ears discussed earlier in this chapter. Patients who have only one cochlear implant and do not use amplification in the contralateral ear are not hearing with two ears and can benefit only from the head shadow effect. As previously discussed, the head shadow effect results from a physical phenomenon rather than from binaural processing. Unilateral (CI-only) patients have only a 50% chance of benefiting from the head shadow effect, because their single implanted ear faces away from a noise source only when the noise happens to fall on the opposite side.
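The 50% figure follows directly from the geometry: head shadow helps only when the device is on the side opposite the noise. A minimal simulation (an illustration, not from the chapter; the side names and function are hypothetical) makes the point:

```python
import random

# Sketch of the head shadow geometry for a unilateral CI listener.
# Assumption: a competing noise source is equally likely to be on the
# listener's left or right side.

def head_shadow_benefit(device_side, noise_side):
    """Head shadow helps only when the device faces away from the noise."""
    return device_side != noise_side

# Unilateral listener with a left-ear implant, many random noise positions:
trials = [random.choice(["left", "right"]) for _ in range(10_000)]
favorable = sum(head_shadow_benefit("left", n) for n in trials) / len(trials)
# favorable comes out near 0.5: the geometry is helpful only half the time.
# A bilateral listener always has one device on the far side from the noise,
# so the corresponding proportion would be 1.0.
```

The simulation simply restates the chapter's point numerically: with one device, a favorable noise geometry is a coin flip; with two, some device is always shadowed from the noise.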
Individuals who wear a cochlear implant and a hearing aid in the nonimplanted contralateral ear have the potential to use binaural processing by utilizing both the electrical signal from the cochlear implant and the acoustic signal from the hearing aid. In general, the cochlear implant serves as the more dominant ear because often the hearing aid alone provides very little benefit. Typically, the hearing aid serves to complement the information received by the cochlear implant. Most CI + HA listeners comment that regardless of the benefit they feel they are receiving from their hearing aid, wearing the hearing aid gives them a more balanced or “full” feeling.
Individuals implanted bilaterally with two cochlear implants have shown promising results with regard to better speech perception in noise and localization. Perhaps because both ears are stimulated similarly (i.e., electrically), the auditory system is able to fuse the information from the two ears, providing many binaural advantages.
Speech Hearing in Noise
The binaural advantages most commonly studied for CI + HA or CI + CI listeners in laboratory conditions are the head shadow effect, binaural summation, binaural squelch, and localization. Some listeners wearing a cochlear implant and a hearing aid in the contralateral ear are able to better understand speech in noise and to identify sound location. However, because one ear is receiving electrical information and the other is receiving acoustical information, fusing the two different signals is often difficult. Although listeners wearing a cochlear implant and hearing aid receive more binaural advantages than those wearing one device alone, binaural advantages are not shown in all situations for all listeners (Dunn et al, 2005). Typically, the cochlear implant and the hearing aid are programmed independently of one another. It seems reasonable to assume that the two ears may be providing the central auditory system with distorted ITDs and ILDs, resulting in misrepresented information. More research is needed to investigate the appropriate way to coordinate the programming of a cochlear implant and hearing aid to promote fusion of the electrical and acoustical signals.
Bilaterally implanted (CI + CI) listeners receive electrical stimulation in both ears, making it easier for the brain to process and fuse the two signals. In addition, both cochlear implants are programmed to provide equal loudness and frequency information across the two devices. Although this may not provide precise interaural timing of electrical stimulation, bilateral coordination of pulsed signals is possible (Tyler et al, in press; van Hoesel & Clark, 1997; van Hoesel & Tyler, 2003).
Adding a second cochlear implant or hearing aid to the contralateral ear enables a listener to take advantage of the ear with the better signal-to-noise ratio, resulting in improved speech perception in noise due to the head shadow effect (Gantz et al, 2002; Muller et al, 2002; Tyler et al, 2002, 2003; van Hoesel & Tyler, 2003; van Hoesel et al, 2002). In addition, adding an ear with a poorer signal-to-noise ratio gives the auditory system the opportunity to compare timing, amplitude, and spectral differences, canceling parts of the waveform to provide better speech understanding in noise through binaural squelch (Gantz et al, 2002; Muller et al, 2002; Tyler et al, 2002, 2003; van Hoesel et al, 2002).
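The better-ear advantage described above can be sketched numerically. In the toy model below (an illustration, not from the chapter), speech arrives from the front and reaches both ears equally, while a lateral noise source is attenuated at the far ear; the ~6 dB attenuation is a rough broadband assumption, since the actual head shadow varies strongly with frequency:

```python
# Hypothetical better-ear model of the head shadow effect.
# Assumption: the head attenuates a lateral noise source by about 6 dB
# at the far (shadowed) ear; speech from the front is equal at both ears.

HEAD_SHADOW_DB = 6.0

def snr_at_ears(speech_db, noise_db, noise_side):
    """Return the signal-to-noise ratio (dB) at each ear for frontal
    speech and a noise source on one side of the head."""
    near_snr = speech_db - noise_db                    # ear nearer the noise
    far_snr = speech_db - (noise_db - HEAD_SHADOW_DB)  # shadowed ear
    if noise_side == "left":
        return {"left": near_snr, "right": far_snr}
    return {"left": far_snr, "right": near_snr}

# Speech at 65 dB, noise at 60 dB on the left:
snrs = snr_at_ears(speech_db=65.0, noise_db=60.0, noise_side="left")
# The noise-side (left) ear sits at 5 dB SNR; the shadowed (right) ear
# at 11 dB. A bilateral listener can attend to the better ear:
best = max(snrs.values())
```

A unilateral listener is stuck with whichever SNR their implanted ear happens to receive; a bilateral listener can always exploit the shadowed ear, which is the head shadow benefit the cited studies report.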
Fig. 18–2 shows sentence recognition scores for two bilateral cochlear implant users, with speech presented from the front and noise presented from either the front or the left. In the noise-front condition of Fig. 18–2, when the left-only and right-only scores are compared with the bilateral score, an advantage is seen for binaural summation. In the noise-left condition, when the left-only score is compared with the bilateral score, the bilateral cochlear implant listeners show that they are able to use the ear with the better signal-to-noise ratio to understand speech more effectively. When the right-only and bilateral scores are compared in the same noise-left condition, an advantage is also shown for the bilateral condition, indicating a binaural squelch effect.