16 Audiological Management II

The previous chapter covered the basic concepts and technical considerations involved with hearing aids. We now continue our coverage of audiological management with a discussion of the use of hearing aids within the overall framework of audiological management, and then go on to discuss cochlear implants, hearing assistance technologies, intervention approaches, and tinnitus management. The first step in audiological management involves determining the need for amplification and then providing the patient with it, always staying mindful that the hearing aid is part of the aural rehabilitation process and not an end unto itself. This concept is by no means limited to hearing aids, but applies equally to cochlear implants, hearing assistance technologies, and tinnitus management. In each case, the overall process involves considerations of the patient’s candidacy for intervention; the selection of the instrument; verification of its fit, functioning, and ability to provide the prescribed characteristics; and validation of the patient’s performance; as well as counseling, monitoring, and follow-up.

Candidacy for Audiological Intervention

Who is a candidate for audiological interventions, such as amplification? The consensus of current opinion is that any patient complaining of auditory difficulties in communicative situations should be viewed as a potential candidate for audiological intervention, such as hearing aids or other kinds of assistive devices (e.g., Hawkins, Beck, Bratt, et al 1991). Clearly, the patient should have some degree of hearing loss, and the need for amplification unquestionably rises as the degree of hearing loss worsens. Hence, it is easy to say that a patient with a pure tone average (PTA) or speech recognition threshold (SRT) of 50 dB HL in both ears needs a hearing aid. But let us see what happens when we play the “countdown” game: Is a hearing aid needed at 45 dB? Absolutely. At 40 dB? Yes. At 35 dB? Of course.
At 30 dB? Sure. How about 25 dB or 20 dB? Well, uh … Notice how we quickly reach a range of losses where we cannot definitively say “yes” or “no.” One of the reasons for hedging on the answer is that the overall degree of hearing loss is only one of the many variables to be considered. For example, hearing losses typically increase with frequency, so a patient can have quite a nasty hearing impairment even though the PTA and/or SRT may be just 10 dB (or even zero). Then, of course, there is the issue of unilateral and asymmetrical impairments: Does Mrs. Jones need a hearing aid if her right ear has a 50 dB loss but her left ear is normal? What about Mr. Smith, who has a 60 dB loss in the right ear and a 30 dB loss in the left? Complicating matters is the distinction between the need for amplification due to the extent and impact of the auditory deficit versus how much benefit the patient experiences from the hearing aid. When the hearing loss is moderate to severe, unaided speech communication is belabored or impossible (need), and this situation improves appreciably albeit not totally when hearing aids are worn (benefit). What’s more, the improvement is readily appreciated by the patient and by others. However, the need for amplification can be ambiguous in cases considered to be marginal because of a mild, high-frequency, or unilateral loss. Here the degree to which the hearing loss affects speech communication is often subtle, inconsistent, and situational, depending on such factors as speaking level, whether there are background or competing noises, and the communicative demands of the patient’s occupational and social interactions. The benefits of amplification can be similarly subtle and inconsistent in patients with marginal impairments, so that a patient may need a hearing aid but perceive little or no benefit from it. However, little perceived benefit does not mean no benefit. 
The subtle benefits of amplification often become apparent when the patient forgets to bring his hearing aid to an important meeting or must do without the instrument while it is being repaired. At the opposite extreme, patients with profound losses have the greatest need for hearing aids, but they often receive relatively little benefit in terms of auditory speech reception because their residual auditory capability is often minimal. Again, however, remember that limited benefit for the purpose of hearing speech is not the same as no benefit at all. On the contrary, patients with profound losses benefit considerably from their hearing aids in terms of the ability to hear alerting, warning, and emergency signals, and as an aid to lipreading. Clearly, hearing aid candidacy depends on more than auditory status alone, and is particularly affected by the patient’s communicative requirements and the need to rely on auditory information. Other motivational factors interact with the hearing loss to induce the patient to see an audiologist, and then to follow the recommendation to obtain hearing aids and to use them. Some of the major factors that appear to motivate patients to obtain a hearing aid for the first time include communication problems at home, in noisy listening situations, in social situations, and at work, as well as encouragement by the spouse (Bender & Mueller 1984). In fact, Palmer, Solodar, Hurley, Byrne, and Williams (2009) showed that, for many patients, self-assessed hearing ability was a strong predictor of whether they would actually purchase hearing aids. Just one question is required: “On a scale from 1 to 10, 1 being the worst and 10 being the best, how would you rate your overall hearing ability?” (p. 342). Most patients with ratings of 1 to 5 actually obtained hearing aids and most with ratings of 8 to 10 did not; but instrument purchases were 50-50 for those who rated their hearing ability at 6 or 7.
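The Palmer et al (2009) single-question finding amounts to a simple triage rule. The sketch below illustrates that rule in code; the function name and return labels are illustrative inventions, not part of the published procedure:

```python
def likely_to_purchase(self_rating: int) -> str:
    """Triage based on a 1-10 self-rating of overall hearing ability
    (1 = worst, 10 = best), following the pattern reported by
    Palmer et al (2009). Function name and labels are illustrative."""
    if not 1 <= self_rating <= 10:
        raise ValueError("rating must be between 1 and 10")
    if self_rating <= 5:
        return "most obtained hearing aids"
    if self_rating >= 8:
        return "most did not obtain hearing aids"
    return "about 50-50"  # ratings of 6 or 7
```

Of course, such a rule only summarizes group tendencies; the individual patient's motivation and acceptance of the loss, discussed next, still govern the actual decision.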
Acceptance of the hearing impairment itself and of the need for clinical assistance to deal with it also weighs heavily in the patient’s decision to obtain amplification. A patient in one of the “marginal” categories is often not willing to accept himself as hearing impaired, let alone so much so that amplification is needed. This is particularly true when the loss has developed slowly and insidiously over a long period of time. Special candidacy issues come into play with pediatric patients, and it is essential for comprehensive intervention including appropriate amplification to be introduced as soon as possible (e.g., PWG 1996; JCIH 2007, 2013; AAA 2013). It is important to be mindful that infants and children are in the process of developing auditory skills, speech, language, and world knowledge. Thus, the hearing-impaired child is faced with a double challenge because (1) development in these areas relies heavily on auditory input, and (2) she cannot depend on linguistic and world knowledge to make up for missed sounds. At this point, it is important to mention a large-sample prospective study by Ching, Dillon, Marnane, et al (2013) because it overcame many of the limitations of prior studies that failed to provide cogent support for early intervention for young children with prelingual hearing loss (see, e.g., Puig, Municio, & Medà 2005; Wolff et al 2010). They found that better speech/language and related performance in prelingually hearing-impaired 3-year-olds was significantly associated with (1) cochlear implants that were turned on at an earlier date, (2) less severe hearing losses, (3) no other disabilities, (4) female gender, and (5) higher maternal education.1 The significant impact of cochlear implants highlights the efficacy of early intervention with children with severe and profound losses. In fact, delaying cochlear implantation from 10 months to 24 months was associated with a dramatic decrease in performance scores at age 3.
In addition to the considerations already discussed, hearing-impaired infants and children are also candidates for amplification if they have unilateral and/or mild losses, or permanent conductive losses, as well as for a trial period with hearing aids when cochlear implants are being considered and in cases of auditory neuropathy spectrum disorder (see, e.g., GDC 2008; Roush, Frymark, Venediktov, & Wang 2011; AAA 2013). How the patient experiences hearing difficulty as well as a variety of nonauditory factors enter into audiological management, involving a host of social, emotional, occupational, health, and other issues. In addition to addressing hearing disorders in terms of structure and function, contemporary thinking focuses upon restrictions of activities (task performance) and participation in day-to-day situations (e.g., WHO 2001; ASHA 2004), and health-related quality of life (AAA 2006a,b). In fact, amplification improves one’s health-related quality of life by mitigating the social and psychological impact of hearing impairment, at least for adult patients (Hnath-Chisolm, Johnson, Danhauer, et al 2007). Because of all of these factors, self-assessment scales play a central role in assessing the impact of hearing impairment, determining candidacy for hearing aids and other treatment approaches, and validating the outcomes of audiological management (e.g., ASHA 1998; AAA 2006a, 2011a, 2013).

1 It might seem odd that earlier versus later hearing aid fitting is missing from the list. However, this makes sense when we realize that most of the children with hearing aids had mild or moderate losses, so that they may have been benefiting from auditory stimulation before receiving their aids, and may not have been enrolled in programs working on auditory skill development over time (Ching, Dillon, Marnane, et al 2013).
Functional Assessment Scales
Functional assessment or self-assessment scales and questionnaires are valuable tools for identifying the nature and scope of a patient’s communicative and related problems. These measures are also known as outcome assessment scales because they are also used for ascertaining the effects of intervention as experienced by the patient. Table 16.1 shows representative scales typically used with adults and the topics they address, and many of the scales used with infants and children are shown in Table 16.2. In addition to describing the direct communicative impact of the hearing loss, many of these instruments also provide insights about its psychological, social, and vocational impact upon the patient. In addition, some instruments assess the impact of a patient’s hearing impairment upon the quality of life experienced by the spouse (see, e.g., Scarinci, Worrall, & Hickson 2009; Preminger & Meeks 2012). Most self-assessment instruments present statements or questions to which the patient must reply along a scale describing how common or true a situation is, how much difficulty or benefit is experienced, or how much he agrees or disagrees with what it says. For example, he might reply on a scale of 1 to 5 (or 1 to 3, 1 to 7, etc.) from “always” to “never,” “very little” to “very much,” or “completely agree” to “completely disagree.” The information derived from self-assessment scales helps define the kinds of rehabilitative efforts needed by isolating specific problems and difficult environmental situations; it also provides us with a basis for counseling the patient and significant persons in her life. In addition, the outcome of a hearing aid fitting or other aspects of the rehabilitation process can be validated by comparing self-assessment scales administered before and after intervention, as well as over the course of the management program. 
Some scales ask the patient to indicate several areas of concern or special relevance to his own situation, which become the basis for assessing intervention outcomes. Self-assessment scales are also used in the audiological evaluation process, as well as in hearing screening programs.

Binaural versus Monaural Amplification

The hearing-impaired listener should be provided with usable binaural hearing whenever possible. The desirability of binaural amplification reflects several advantages of binaural hearing compared with monaural hearing (Valente 1982; deJong 1994; Gelfand 2010):

- Binaural summation results in an advantage over monaural listening corresponding to ~3 dB at threshold and ~6 dB at suprathreshold levels.
- Binaural difference limens for both frequency and intensity are smaller (better) than the corresponding values for monaural hearing.
- Binaural hearing maximizes directional hearing ability (e.g., sound localization) and alleviates the head shadow effect.
- In addition, binaural listening reduces the adverse effects of noise and reverberation on speech intelligibility in real-world listening conditions.

For practical purposes, this means that binaural amplification is preferred over a monaural hearing aid for most patients with bilateral hearing losses, and many patients with unilateral hearing losses should benefit from the use of a hearing aid in the impaired ear. Binaural amplification clearly is the recommended protocol for children, including bimodal stimulation for children using a cochlear implant in one ear if the opposite ear has any residual hearing (AAA 2013). The term bimodal refers to the use of a cochlear implant in one ear and a hearing aid in the other ear. As one would expect, there are cases where binaural amplification is contraindicated, and these are discussed below. There is also a compelling negative reason to choose binaural amplification whenever possible.
Silman, Gelfand, and Silverman (1984) demonstrated that adults with bilateral sensorineural hearing losses who use monaural hearing aids can develop an auditory deprivation effect in which speech recognition scores deteriorate over time in their unaided ears even though intelligibility scores remain unchanged in their aided ears. In contrast, binaural hearing aid users do not experience a speech recognition deficit in either ear. This phenomenon has been corroborated by numerous studies in both adults (e.g., Gelfand, Silman, & Ross 1987; Stubblefield & Nye 1989; Silverman & Silman 1990; Silman, Silverman, Emmer, & Gelfand 1992; Hurley 1993, 1999; Gelfand 1995; Silverman, Silman, Emmer, Schoepflin, & Lutolf 2006) and children (Gelfand & Silman 1993). Once a patient has developed an auditory deprivation effect, adding a second hearing aid to the previously unfitted ear may reduce or alleviate the problem in many but not all cases (Silverman & Silman 1990; Silman et al 1992; Hurley 1993; Gelfand 1995). In addition, some patients actually reject a second hearing aid even though the auditory deprivation effect has been reduced or eliminated by its use (e.g., Hurley 1993). The foregoing discussion demonstrates that binaural hearing aids should be the first consideration for patients with bilateral hearing losses. This conclusion agrees with the consensus of opinion in the field (e.g., Hawkins et al 1991; PWG 1996; AAA 2013). However, binaural amplification is not always the best overall choice for every patient. For example, Walden and Walden (2005) reported poorer binaural than monaural performance among patients in the 50- to 90-year-old age range. Moreover, some patients experience a binaural interference effect, in which the participation of the more impaired ear (as with binaural hearing aids) results in a deterioration of performance rather than an improvement (Jerger et al 1993).
In addition, binaural hearing aids may be rejected by the patient due to any number of reasons that might be perceptual, behavioral, physical, cognitive, emotional, and/or financial.

Table 16.1 Characteristics of several self-assessment instruments
Scale | Author(s) | Characteristics assessed |
---|---|---|
Abbreviated Profile of Hearing Aid Benefit (APHAB) | Cox & Alexander (1995) | Abbreviated version of PHAB addressing ease of communication, reverberation, background noise, aversiveness of sounds. |
Attitudes toward Loss of Hearing Questionnaire, v. 2.1 (ATLH) | Saunders, Cienkowski, Forsline, & Fausti (2005) | Hearing loss denial; negative associations; negative coping strategies; manual dexterity and vision; hearing-related esteem. |
Client Oriented Scale of Improvement (COSI) | Dillon, James, & Ginis (1997) | Patient nominates up to five specific situations of concern in order of importance, before treatment. Each situation is rated according to (a) how much better/worse and (b) ease of communication, after treatment. |
Communication Profile for the Hearing Impaired (CPHI) | Demorest & Erdman (1986) | 25 scales addressing communicative performance, importance, environment, strategies, personal adjustment, response biases. |
Communication Scale for Older Adults (CSOA) | Kaplan, Bally, Brandt, Busacco, & Pray (1997) | Communication strategies, communication attitudes; for use with non-institutionalized elderly adults. |
Expected Consequences of Hearing Aid Ownership (ECHO) | Cox & Alexander (2000) | Patient expectations about hearing aids in terms of (a) acoustical benefits, (b) service and cost, (c) negative features, (d) psychosocial implications. |
Glasgow Hearing Aid Benefit Profile (GHABP) | Gatehouse (1999) | Addresses initial disability, handicap, hearing aid usage, hearing aid benefit, residual disability, and satisfaction for each of four situations plus four additional situations nominated by the patient. |
Hearing Handicap Inventory for Adults (HHIA) | Newman, Weinstein, Jacobson, & Hug (1991) | Modification of HHIE for use with non-elderly adults. |
Hearing Handicap Inventory for the Elderly (HHIE) | Ventry & Weinstein (1982) | Social-situational, emotional. |
Hearing Impairment Impact—Significant Other Profile (HII-SOP) | Preminger & Meeks (2012) | Impact of hearing loss on spouse’s quality of life: emotions and relationship, social impact, spouse’s communication strategies. |
International Outcome Inventory for Hearing Aids (IOI-HA) | Cox & Alexander (2002) | Patient rates hearing aid outcome experience based on daily usage, benefit derived, continuing activity limitations, satisfaction, continuing participation limitations, impact on others, quality of life. |
Profile of Hearing Aid Benefit (PHAB) | Cox, Gilmore, & Alexander (1991) | Situations involving familiar talkers, ease of communication, reverberation, reduced cues, background noise, aversiveness of sounds, distortion of sounds. |
Satisfaction with Amplification in Daily Life (SADL) | Cox & Alexander (1999) | Global (overall) satisfaction with hearing aid, positive effects, service and cost, positive image, negative features. |
Self-Assessment of Communication (SAC); Significant Other Assessment of Communication (SOAC) | Schow & Nerbonne (1982) | Communication situations, feelings, opinions about others’ views. SOAC completed by familiar other. |
Significant Other Scale for Hearing Disability (SOS-HEAR) | Scarinci, Worrall, & Hickson (2009) | Impact of hearing loss on spouse’s quality of life: communication changes; communicative burden; relationship changes; going out and socializing; emotional reactions; concern for partner. |
Speech, Spatial and Qualities of Hearing Scale (SSQ) | Gatehouse & Noble (2004) | Addresses both static and dynamic listening conditions. Speech hearing section includes quiet, noise, reverberation, multiple talkers, selective/shifting attention. Spatial hearing section includes direction, distance, motion. Other qualities section includes sound recognition, segregation, clarity, naturalness, listening effort. |
Candidacy for binaural hearing aids becomes less clear when patients have asymmetrical losses, that is, bilateral impairments that are significantly different between the two ears. The “traditional wisdom” was that preference for binaural hearing aids begins to wane when the two ears have pure tone averages that differ by more than 15 dB and/or speech recognition scores that differ by more than 8%; and binaural amplification is progressively less desirable as the inter-ear difference gets wider. It is reasonable to anticipate that patients with large inter-ear differences probably will not have the same chances of success with binaural hearing aids as their counterparts with more symmetric losses. However, there is no reason to rule out binaural amplification without even a try simply because there is a difference between the ears (e.g., Sandlin & Bongiovanni 2002; AAA 2013). Of course, one should be alert to the possibility of the binaural interference phenomenon when binaural hearing aids are rejected or when binaural performance is poorer than that of the better ear alone.
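The “traditional wisdom” criteria above can be expressed as a simple screening check. The function below is a hypothetical sketch of those rules of thumb (its name and parameters are invented for illustration); as the text stresses, a flagged asymmetry argues for caution, not for ruling out a binaural trial:

```python
def traditional_asymmetry_flag(pta_left, pta_right, srs_left, srs_right,
                               pta_limit=15.0, srs_limit=8.0):
    """Flag an asymmetry by the 'traditional wisdom' criteria described in
    the text: inter-ear PTA difference greater than 15 dB and/or speech
    recognition score difference greater than 8 percentage points.
    A True result suggests caution about binaural hearing aids, not a
    contraindication (e.g., Sandlin & Bongiovanni 2002; AAA 2013)."""
    pta_diff = abs(pta_left - pta_right)    # dB HL
    srs_diff = abs(srs_left - srs_right)    # percentage points
    return pta_diff > pta_limit or srs_diff > srs_limit
```

For example, a patient with PTAs of 40 and 60 dB HL would be flagged by the PTA criterion, whereas PTAs of 40 and 50 dB HL with scores of 80% and 76% would not be flagged by either criterion.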
Open-Canal Fittings
Hearing aid fittings that leave the ear canal unoccluded are often considered for patients with sloping sensorineural hearing losses who have good sensitivity in the low frequencies. The basic approach is to use an open earmold or a vent drilled through the earmold, which facilitates high-frequency amplification without also amplifying the low frequencies. This approach has been used for a long time, and was sometimes referred to as IROS (“I” for “ipsilateral”) on analogy to CROS, although the term is a misnomer. There has been a major increase in the use of open-canal fittings in which mini-BTE hearing aids are connected to the ear with thin tubing. The benefits of open-canal fittings are often enhanced when used with digital feedback cancellation technology and directional microphones (see, e.g., Fabry 2006; Mueller & Ricketts 2006; Johnson, Ricketts, & Hornsby 2007).
CROS-Type Fittings
The CROS (contralateral routing of signals) hearing aid was introduced for patients who have one normal ear and one substantially impaired ear that is unaidable (Harford & Barry 1965). The impaired ear might be unaidable because it is totally deaf or due to a medical condition that precludes the use of a hearing aid, such as chronic drainage. Even an ear with a reasonable amount of residual hearing sensitivity sometimes proves to be unaidable due to, for example, an extremely low speech recognition score. These patients must rely on one ear, which means that sounds originating from the impaired side will be received at a reduced level due to the head shadow. Depending on the communicative and other demands of their work and social environments, some individuals in this situation can function with little or no difficulty by relying on the remaining normal ear, but others experience a considerable disadvantage and need assistance to hear sounds originating from the impaired side. Imagine, for example, the disadvantage experienced by an executive who cannot hear the people sitting to her left at a business meeting, or a taxi driver with a deaf right ear who cannot hear his passengers.
The “good ear” does not have to be normal for CROS to be used. Instead, CROS can also be used if there is a high-frequency sensorineural hearing loss in the better ear (Harford & Dodds 1966). In fact, it appears that the chances of success with CROS are best when there is a mild to moderate hearing loss above 1500 Hz in the better ear, and are only minimal when it is normal (Valente, Valente, Enrietto, & Layton 2002). This arrangement works for two reasons. First, the open earmold facilitates amplification of the high frequencies without boosting the lows. Second, placing the microphone and receiver in separate ears avoids the acoustic feedback problem that is likely to occur when the microphone and receiver are able to communicate via an open earmold in the same ear. This arrangement minimizes feedback by increasing the distance between the microphone and receiver and taking advantage of the head shadow effect.
Another approach to unilateral hearing loss, often called transcranial CROS, involves directing stimuli from the deaf side to the hearing ear via bone-conduction using a conventional bone-conduction hearing aid, a powerful air-conduction hearing aid, or a BAHA device, worn on the deaf side (e.g., Valente et al 2002; Spitzer et al 2002; Wazen et al 2003; Hol, Kunst, Snik, & Cremers 2010).
The BICROS arrangement is considered when the patient has an unaidable poorer ear for any of the above-mentioned reasons, as well as a hearing loss that can benefit from a hearing aid in the better ear (Harford 1966). It is easiest to think of this arrangement as a regular hearing aid in the better ear, with its own microphone, receiver, and the appropriate kind of earmold (instead of an open CROS-type earmold), plus an additional “off-side” microphone that is located at the unaidable poorer ear on the other side of the head.
Reported outcomes with CROS and BICROS amplification have been rather mixed (e.g., Gelfand 1979; Cashman, Corbin, Riko, & Rossman 1984; Ericson, Svärd, Högset, Devert, & Ekström 1988; Hill, Marcus, Digges, Gillman, & Silverstein 2006; Lin et al 2006; Hol et al 2010; Taylor 2010). However, it is noteworthy that Williams, McArdle, and Chisolm (2012) found that user performance and satisfaction improved when previously used BICROS instruments were replaced with newer ones employing state-of-the-art technology.
Prefitting Considerations
The first steps in the process of providing the patient with amplification include patient assessment (see Candidacy for Audiological Intervention, above), obtaining medical clearance, and providing the patient with initial counseling. The purpose of the medical clearance is to meet legal and regulatory requirements and to be sure no medical problems exist that would contraindicate a hearing aid or would create the need for special considerations. Otologic and related pathologies are discussed in Chapter 6. Initial counseling should address such issues as the reasons for amplification and other aural rehabilitation services that might be indicated; the kinds of instrument(s) that are appropriate; the reasons for preferring binaural amplification, if appropriate; and realistic goals and expectations regarding what hearing aids can and cannot do. Other issues often discussed at this point revolve around such practical matters as the costs involved, where the instrument can be purchased, the 30-day trial period, batteries, instrument warranties, and insurance. The patient, his family, and other pertinent individuals (caregivers, etc.) should have a clear and realistic understanding of these issues so they can be active participants in the audiological management process, of which the hearing aids are an important part (ASHA 1998; AAA 2006a). Earmold impressions might be taken at this point or at a subsequent time, depending on the kinds of instruments being considered and who will be dispensing them. These issues are included in the early phases of both the ASHA (1998) and AAA (2006a) guidelines summarized in Table 16.3 and Table 16.4.
Table 16.3 Some features of the ASHA (1998) hearing aid guidelines for adults
Stage | Description |
---|---|
Assessment | Determine nature and magnitude of hearing loss. Assess hearing aid/rehabilitation candidacy based on audiometric data, self-assessment protocols, etc. Consider patient’s unique circumstances (e.g., physical status, mental status, attitude, motivation, sociological status, communication status). |
Treatment planning | Audiologist, patient, and family/caregivers review findings to identify needs, arrive at rehabilitative goals, plan intervention strategies, understand treatment benefits/limitations/costs and how outcomes are assessed. |
Selection | Hearing aid(s) selected in terms of electro-acoustic (e.g., frequency gain, OSPL90, input-output function) and other (monaural/binaural; style and size, etc.) characteristics. |
Verification | ANSI S3.22 measurements (hearing aid electro-acoustics) and real-ear measurements (e.g., prescriptive targets, performance on patient). Physical and listening checks for physical fit, intermittencies, noisiness, etc. Determine whether audibility, comfort, and tolerance expectations are met. |
Orientation | Counseling on hearing aid use and care, realistic expectations, etc.; explore candidacy for hearing assistive technology, audiologic rehabilitation assessment, and further intervention. |
Validation | Assess intervention outcomes, reduction of disability, goals addressed using such measures as self-assessment tools, objective or subjective measures of speech perception, other means. |
Instrument Selection/Evaluation/Fitting
The process of providing the patient with amplification has been described by various terms over the years, such as hearing aid evaluation, hearing aid consultation, hearing aid selection, and hearing aid fitting. The nature of the fitting process has evolved over the years and continues to do so, and no one method is applicable in all cases, let alone universally accepted. Contemporary approaches implicitly accept a concept that has traditionally been called selective amplification. This term originally meant that the amount of gain at each frequency should depend on the degree and configuration of the patient’s hearing loss. A more modern definition would say that the hearing aid’s electroacoustic characteristics should be chosen or adjusted so that they are most appropriate for the nature of the patient’s hearing loss. The selective amplification concept is so ingrained in modern clinical philosophy that the term itself is rarely used anymore. However, the student should be aware that this was not always the case. For example, the influential Harvard (Davis, Stevens, Nichols, et al 1947) and MedResCo (1947) reports concluded that most patients could be optimally fitted with a hearing aid that has a flat or slightly rising frequency response, so that an individualized selection process would be superfluous in all but unusual cases. The ensuing controversy lasted for decades (see, e.g., Levitt, Pickett, & Houde 1980). As already indicated, individualized hearing aid selection has proven to be the accepted approach, although numerous different methods have been proposed to accomplish this. These methods can be categorized into two broad groups, which we will call the comparative and prescriptive approaches.
Table 16.4 Some features of the AAA (2006a) Audiological Management Guidelines for Adults
Feature | Description |
---|---|
Assessment and goal setting | Auditory assessment and diagnosis. Self-perception of communication needs using self-assessment instruments. Assess non-auditory needs. Set treatment goals. Determine medical referral/clearance needs. |
Technical aspects of treatment | Hearing aid selection: make sure hearing aids include specified features and meet quality standards before the fitting appointment. Hearing aid fitting and verification: comfortable fit; prescribed gain, OSPL90, etc., preferably using probe microphone and simulated real-ear methods; absence of occlusion effect or feedback problems; consideration of hearing assistive technology. |
Orientation | A significant other should be involved if possible. Provide device-related information. Discuss goals, expectations, wearing schedule, adjusting to hearing aid use in various settings, effects of various environments, listening strategies, speechreading, monaural versus binaural amplification, post-fitting issues. |
Counseling and follow-up | Provided to new hearing aid users; offered to experienced users. Primary communication partner should be included if possible. Discuss topics pertaining to hearing mechanism, hearing loss and effects including speech in noise, approaches and strategies for minimizing these issues, care and use of hearing aids; possibility of adjustment/acclimatization period before full benefits are apparent; realistic expectations; community resources. |
Assessing outcomes | Analogous to validation stage in ASHA (1998). Use of objective measures (e.g., speech recognition tests) and subjective measures (e.g., self-assessment instruments). Determine extent to which goals have been achieved, including effects on communication, activity and participation limitations, and quality-of-life issues. |
Comparative Hearing Aid Evaluations
The traditional hearing aid evaluation (HAE) used to involve preselecting several hearing aids that appeared to be appropriate for the patient on the basis of their electroacoustic specifications, and then comparing the patient’s performance with each of them using a variety of speech tests presented from loudspeakers. The procedure originally described by Carhart (1946) began with unaided measurements of SRT, the tolerance limit for speech, and a speech recognition score, followed by a series of tests with each of the hearing aids being compared. These tests began by finding the volume control setting where the patient judged speech presented at 40 dB HL to be comfortably loud. This was followed by finding SRTs and tolerance limits at the comfort and full-on gain settings, the signal-to-noise ratio that rendered speech at 50 dB HL barely audible, and speech recognition scores.
The extensive testing involved in the original Carhart method quickly led to abbreviated versions, generically known as comparative hearing aid evaluations or modified Carhart methods. In a typical evaluation, the patient would be tested with each hearing aid to obtain an SRT and speech recognition scores for words presented in quiet and/or against a background of noise. Other methods compared hearing aids based on quality and/or intelligibility judgments (e.g., Punch & Beck 1980; Punch & Parker 1981; Neuman, Levitt, Mills, & Schwander 1987; Cox & McDaniel 1989; Surr & Fabry 1991; Kuk 1994). The hearing aid with the best speech recognition score (or judgment rating) was chosen for the patient. This was often followed by a short trial period with the hearing aid. The patient then purchased an instrument of the same make and model, and would subsequently return to the clinic to determine whether the purchased instrument provided adequate performance.
Comparative HAEs have been abandoned in favor of prescriptive methods, principally because speech recognition tests could not distinguish among hearing aids adequately and reliably. For example, Walden, Schwartz, Williams, Holum-Hardegen, and Crowley (1983) found that the mean difference in speech recognition scores for words presented in noise was only 4.3% among acoustically similar hearing aids. In addition, only 4.5% of the score differences were large enough to be significant on an individual-patient basis.2 This was the same situation involved in a real-world comparative HAE, where the audiologist preselected several appropriate (hence similar) instruments to compare behaviorally on the patient. In other words, speech recognition scores were unable to distinguish among the instruments being compared. Moreover, the hearing aid with the highest speech recognition score was not always the one judged best by a patient after a trial period of actual use.
The comparative HAE approach also had practical limitations: In-the-ear instruments do not lend themselves to comparative assessments because they are custom-ordered. Also, programmable instruments have multiple settings for different listening conditions, so that comparative assessments between instruments are simply unrealistic.
Prescriptive Hearing Aid Fitting
Contemporary hearing aid fittings involve prescriptive methods in which “formulas” are used to prescribe the amplification characteristics considered to be most appropriate for a patient. The very idea of using formulas to prescribe gain might seem strange. After all, we know from common experience that eyeglasses often restore “20/20” visual acuity. Consequently, it would seem that the amount of gain should be equal to the amount of hearing loss because this would restore 0 dB HL thresholds. This notion would be a great idea except for the fact that it is wrong. Equating gain with the amount of loss invariably results in too much amplification, subjecting the patient to excessive amounts of loudness and distortion, and placing the patient at risk for developing a noise-induced hearing loss. In fact, if we try to give patients too much gain, they often negate the intent by turning down the volume control, and may even reject amplification altogether. If the goal of amplification is not to restore normal hearing, then, alas, what is the goal? The answer is to provide the amount and configuration of gain that maximizes the audibility of conversational speech without making the amplified signal uncomfortably loud. Skinner (1988) described this gain configuration as the one that provides the patient with the best compromise between maximum speech intelligibility and acceptable sound quality.
Hearing aid prescription formulas are really explicitly described sets of rules used to determine the amounts and configurations of gain and output sound pressure level that are most desirable for the patient (the selection stage in Table 16.3 and the technical aspects section in Table 16.4). The American Academy of Audiology (AAA 2013) stressed the importance of providing audibility for speech over a wide range of input levels in the context of fitting infants and young children, but this goal is clearly desirable for older children and adults as well. Depending on which prescriptive method is used, the formula might use the patient’s thresholds, comfortable listening levels, and/or loudness discomfort levels as a function of frequency. This information might come from the patient’s audiogram, specially administered tests, or a combination of the two. Reviews of this topic and summaries of prescriptive formulas for adults and children are available in several sources (e.g., Skinner 1988; Mueller, Hawkins, & Northern 1992; PWG 1996; Traynor 1997; Valente 2002; Dillon 2012). The student should keep in mind that existing prescriptive approaches are revised and new methods are introduced from time to time, and that different prescriptive methods can recommend different gain configurations for the same patients.
The approaches described by Watson and Knudsen (1940) and by Lybarger (1944) might be considered the prototypes for prescriptive hearing aid fittings. Lybarger’s (1944) more widely known half-gain rule recommended setting the hearing aid’s gain to 50% of the patient’s hearing loss at each frequency (except 500 Hz, where 30% gain was used). On the other hand, the method recommended by Watson & Knudsen (1940) essentially involved setting the gain at each frequency so that the speech would approximate the equal-loudness-level curve that was most comfortable for the patient. Since then, many methods have been proposed that prescribe amplification characteristics based on unaided thresholds and/or suprathreshold measures, and also account for a wide range of considerations such as severe to profound hearing loss, conductive components, and the capabilities of modern programmable instruments (e.g., McCandless & Lyregaard 1983; Byrne & Dillon 1986; Libby 1986; Schwartz, Lyregaard, & Lundh 1988; Berger, Hagberg, & Rane 1988; Byrne, Parkinson, & Newall 1990; Killion 1996; Seewald et al 1997; Dillon 1999a,b, 2006; Scollie, Seewald, Cornelisse, et al 2005; Byrne, Dillon, Ching, Katsch, & Keidser 2001; Keidser, Dillon, Dyrlund, Carter, & Hartley 2011; Keidser, Dillon, Carter, & O’Brien 2012; Johnson 2013). Some prescriptive methods attempt to prescribe the gain configuration needed to place the long-term average speech spectrum at the listener’s MCL or approximately midway between the patient’s thresholds and loudness discomfort levels (e.g., Shapiro 1976, 1980; Bragg 1977; Cox 1988; Skinner 1988), or to levels 10 dB below the patient’s LDLs for most frequencies. Another protocol uses thresholds and loudness judgments, and attempts to enable the patient to experience soft sound levels as audible and soft, average sound levels as comfortable, and high sound levels as loud but not uncomfortably loud (e.g., Cox 1995; Valente & Van Vliet 1997).
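To make the idea of a prescriptive formula concrete, the historical half-gain rule just described can be sketched in a few lines. This is an illustrative exercise only, not a modern fitting method, and the audiogram values used below are hypothetical examples.

```python
# Illustrative sketch (not a clinical tool) of Lybarger's (1944) half-gain
# rule as described in the text: gain = 50% of the hearing loss at each
# frequency, except 500 Hz, where 30% is used.

def half_gain_rule(thresholds_db_hl):
    """Return prescribed gain (dB) per frequency using the half-gain rule."""
    gains = {}
    for freq_hz, loss_db in thresholds_db_hl.items():
        factor = 0.3 if freq_hz == 500 else 0.5  # 500 Hz is the exception
        gains[freq_hz] = round(loss_db * factor, 1)
    return gains

# Hypothetical audiogram with a sloping high-frequency loss (dB HL)
audiogram = {500: 30, 1000: 40, 2000: 55, 4000: 70}
print(half_gain_rule(audiogram))
# {500: 9.0, 1000: 20.0, 2000: 27.5, 4000: 35.0}
```

Note how even this simplest of rules prescribes more gain where the loss is greater, the essence of selective amplification; modern formulas such as NAL-NL2 and DSL pursue the same goal with far more sophisticated inputs.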
In addition to the desired amount of gain as a function of frequency, it is also important to prescribe OSPL90 values that do not exceed the patient’s loudness discomfort levels. Output sound pressure levels can be prescribed based on loudness discomfort level measurements (using, e.g., narrow-band noises or warble tones) or employing various estimating procedures (e.g., Cox 1983, 1985; Hawkins, Walden, Montgomery, & Prosek 1987; PWG 1996; Cox, Alexander, Taylor, & Gray 1997; ASHA 1998; Dillon & Storey 1998; Storey, Dillon, Yeend, & Wigney 1998; Dillon 1999a; Keidser et al 2011, 2012; Johnson 2013).
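The requirement that prescribed maximum output stay at or below the patient's loudness discomfort levels can be pictured as a simple frequency-by-frequency cap. The target and LDL values in this sketch are hypothetical, chosen only to show the principle.

```python
# Illustrative sketch: capping prescribed OSPL90 so that it does not exceed
# the patient's measured loudness discomfort levels (LDLs) at any frequency.
# All dB SPL values below are hypothetical examples.

def cap_ospl90(targets_db_spl, ldls_db_spl):
    """Limit each frequency's OSPL90 target to the measured LDL (if any)."""
    return {freq: min(target, ldls_db_spl.get(freq, target))
            for freq, target in targets_db_spl.items()}

targets = {500: 110, 1000: 115, 2000: 118}  # hypothetical prescribed OSPL90
ldls = {500: 112, 1000: 112, 2000: 115}     # hypothetical measured LDLs
print(cap_ospl90(targets, ldls))
# {500: 110, 1000: 112, 2000: 115}
```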
The prescription may also be viewed in terms of targets. For example, the targeted frequency response expresses the amount of gain that we want delivered to the patient’s eardrum at each frequency. Similarly, maximum output is also prescribed in terms of target values for OSPL90 at the patient’s eardrum. The ideal hearing aid would be the one that actually provides these targeted values. To achieve this goal with BTE and body hearing aids, the clinician selects a stock instrument based on the information provided in the manufacturer’s specifications book. For ITE and canal aids, the audiologist places a customized order with the manufacturer, who then fabricates an instrument using electronic and acoustical methods that should provide the prescribed characteristics.
It is important to be aware that hearing aid manufacturers commonly use their own proprietary fitting protocols. However, the preferred practice is for audiologists to use prescriptive targets based on an appropriately chosen, well-documented prescriptive method. This is especially important with children (AAA 2013). Also, programmable hearing aids make it possible to use different prescriptions based on the listening situation. For example, children might switch between hearing aid settings based on the NAL-NL1 (Byrne et al 2001) and DSL[i/o] (Seewald et al 1997) prescriptions depending on whether the listening situation is quiet or noisy (Ching, Scollie, Dillon, et al 2010; Scollie, Ching, Seewald, et al 2010).
Verification
Verification of the hearing aid fitting is carried out to confirm that the instrument actually provides the prescribed amplification characteristics and to adjust these as needed, to ensure that it is working without problems (e.g., distortions, noisiness, intermittencies), and to be sure it fits properly and comfortably. (These issues are included in the section on the verification stage in Table 16.3 and in the technical aspects section in Table 16.4.)
That the instrument actually meets the intended electroacoustic characteristics selected by the audiologist is verified by testing the instrument in a hearing aid test box using the measurements described in the ANSI S3.22 (2009) standard and in place on the patient (in situ) using real-ear (probe-tube) measurements. These are essential measurements about the hearing aid itself, but they do not tell us what the instrument is doing for the patient. Hence, we must also test the hearing aid on the patient to verify that it is actually providing her with the intended performance. The preferred method for verifying hearing aid performance with respect to the patient involves real-ear or probe-tube measurements (PWG 1996; ASHA 1998; AAA 2006a). Real-ear to coupler differences (Chapter 15) should be employed with infants and young children (AAA 2013). It is often necessary for the audiologist to “fine-tune” the hearing aid’s characteristics to bring them as close as possible to the targeted values. This is done by making program adjustments for programmable instruments, or by adjusting the hearing aid’s internal settings for hearing aids that are not programmable.
Validation (Outcome Assessment)
In addition to verifying the adequacy of the fitting in terms of consistency with the prescribed characteristics, it is also necessary to validate the benefits afforded to the patient using appropriate outcomes instruments like those shown in Table 16.1 and Table 16.2 (e.g., Hawkins et al 1991; ASHA 1998; AAA 2006a, 2013), corresponding to the validation stage in Table 16.3 and the assessing outcomes section in Table 16.4. As the tables show, validation or outcome assessment may be accomplished in a variety of ways that might employ, for example, speech intelligibility tests, quality and/or intelligibility judgments, paired comparisons, or other assessment tools. Notice that speech intelligibility assessment is now used as one of several ways to validate the adequacy of a hearing aid fitting, whereas it was the criterion measure for selecting among instruments in the traditional HAE approach that was used in the past.
Post-Fitting Considerations
There is almost always a 30-day trial period during which the patient may elect to return the hearing aid. Although one should not lose sight of the consumer protection aspects of the trial period, it is probably more important for the audiologist and patient to consider it the time when the orientation stage (Table 16.3) of the hearing aid fitting process takes place. Consequently, the patient should be urged to return for consultation several times during this period, not just at the end; and it is a very good idea to schedule the first follow-up appointment before the patient leaves. Also, it cannot be overstressed that ongoing follow-ups on a regular basis are especially important with children (AAA 2013).
Once the hearing aid fitting has been judged acceptable, the patient and any significant others (family, parents, etc., depending on the specific circumstances) should be instructed about all aspects of the use and care of the instrument. The patient should know how to insert and remove the instrument, use its controls, clean and maintain it, replace batteries, etc. She should understand the goals of amplification, have realistic expectations for what hearing aids can and cannot do, and know the nature of effective listening strategies. These initial clinical counseling activities should also include the consideration of assistive devices and other aural rehabilitation services.
A word about batteries is in order before proceeding. The patient and others who handle the instruments should also be made aware that hearing aid batteries are to be treated as a dangerous poison. This is not an overstatement. Special attention should be given to battery safety (including the use of tamper-resistant battery compartments) with pediatric patients and others who require special supervision. Batteries should be inaccessible to at least younger children as well as to pets. Even adult patients should be counseled about battery safety and to avoid unsafe practices like using their lips as an extra set of hands while changing batteries. Grandparents and other patients without “baby proof” homes should be counseled about battery safety as well, particularly when children might be visiting.
Cochlear Implants
Some hearing losses are so extensive that the patient is unable to derive any appreciable benefit from even the most powerful hearing aids. Other types of sensory aids are needed for these patients. Cochlear implants attempt to provide the patient with information about sound by converting it into an electrical current, and then using this electrical signal to directly stimulate any remaining auditory nerve fibers the patient might still have.
Cochlear implants are composed of both internal components that are surgically installed and external devices worn outside the body, as illustrated in Fig. 16.1; and examples of contemporary instruments are shown in Fig. 16.2a–d. The external components include (1) a microphone that picks up sound and converts it into an electrical signal, (2) the speech processor that analyzes the sound and converts it into a code representing various aspects of the sound, and (3) a transmitter that sends the encoded information to the implanted device via an electromagnetic or radiofrequency signal. The surgically implanted components include (1) a receiver that picks up the signal from the external transmitter and (2) the electrical stimulator with its electrodes that are inserted into the cochlea. The electrodes send the encoded information to the remaining fibers of the auditory nerve in the form of an electrical signal. In addition, a ground electrode is placed somewhere outside the cochlea, such as in the middle ear. Most contemporary speech processors and microphones are housed in behind-the-ear units like post-auricular hearing aids, which are in turn connected to the external transmitter by a small wire, although other arrangements are available. For example, MED-EL’s RONDO instrument combines the speech processor and transmitter into one self-contained unit, and the Advanced Bionics Neptune is a “freestyle” speech processor that can be worn in various locations on the head or upper body.
Fig. 16.1 Representation of a cochlear implant in a patient, showing external and implanted components. (Picture courtesy of the National Institutes of Health.)
Fig. 16.2 Examples of cochlear and brainstem implants. (a) Advanced Bionics cochlear implant systems (Neptune freestyle processor at right). (b) MED-EL cochlear implant systems (RONDO self-contained unit at left). (c) Cochlear Americas cochlear implant system. (d) Cochlear Americas hybrid system. (e) MED-EL auditory brainstem implant. (Frame a provided courtesy of Advanced Bionics. Frames b and e provided courtesy of MED-EL. Frames c and d images provided courtesy of Cochlear Americas. © 2014 Cochlear Americas. Used with permission.)
To take advantage of tonotopic organization, cochlear implants have multiple channels to separate the spectrum into frequency bands and multiple electrodes laid out along the cochlea.3 Depending on the manufacturer, cochlear implants typically have 12 (MED-EL), 16 (Advanced Bionics), or 22 (Cochlear) active electrodes, in addition to ground electrodes. In each case, the electrodes closer to the base of the cochlea represent higher frequencies and those toward the apex represent lower frequencies. In other words, the electrode layout follows the pattern of how frequencies are arranged by place along the cochlea. The cochlear implant’s microprocessor analyzes the incoming sound in terms of various parameters, which are coded into combinations of tiny currents delivered to the electrodes laid out in the cochlea. Various coding strategies are used to represent the speech signal. For example, the patterns of stimulation from the array of electrodes might represent the spectral peaks of the incoming sound, speech features like formants and fricative noises, or the waveform of the incoming signal. Some approaches activate one electrode at a time in rapid succession, whereas other systems activate several electrodes simultaneously. Information about the intensity of a sound is provided by the strength of the current coming from the electrodes.
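The channel idea described above can be made concrete with a small sketch that divides a speech-frequency range into contiguous analysis bands, one per active electrode, with the lowest band assigned to the most apical electrode. The 200 to 8000 Hz range and the logarithmic spacing are illustrative assumptions only, not any manufacturer's actual frequency allocation.

```python
# Illustrative sketch of how a processor might split the spectrum into one
# analysis band per active electrode, mirroring the cochlea's tonotopic
# layout (low frequencies toward the apex, high toward the base).
# The frequency range and log spacing are assumptions for illustration.

def electrode_bands(n_electrodes, lo_hz=200.0, hi_hz=8000.0):
    """Split [lo_hz, hi_hz] into n contiguous, logarithmically spaced bands.
    Band 0 (lowest frequencies) maps to the most apical electrode."""
    edges = [lo_hz * (hi_hz / lo_hz) ** (i / n_electrodes)
             for i in range(n_electrodes + 1)]
    return [(round(edges[i]), round(edges[i + 1]))
            for i in range(n_electrodes)]

# A 12-electrode example (the count used by one manufacturer in the text)
for i, (lo, hi) in enumerate(electrode_bands(12)):
    print(f"electrode {i} (apex toward base): {lo}-{hi} Hz")
```

Logarithmic spacing is used here simply because the cochlea's place-frequency map is roughly logarithmic; actual devices use manufacturer-specific allocations that can also be adjusted during programming.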
Cochlear implant technology and clinical applications have been developing rapidly, and this process will continue well into the future. For example, there are ongoing changes in such areas as instrumentation, processing strategies, and candidacy, as well as binaural cochlear implants (combining cochlear implantation with acoustical amplification) and auditory brainstem implants.
Bimodal stimulation means that the patient is provided with a combination of both electrical and acoustical stimulation. This combined electric and acoustic stimulation (EAS) can be accomplished in two basic ways, which might be used alone or in various combinations. One EAS approach is to use a cochlear implant (electrical stimulation) in one ear and a hearing aid (acoustic stimulation) in the other ear, which consequently also provides binaural stimulation. The second bimodal approach involves a hybrid cochlear implant, which combines a cochlear implant and a hearing aid in one device that provides both acoustic and electric stimulation to the same ear. To make this possible, the hybrid device has a shorter electrode array that does not extend as far up the cochlea as regular implants do, which allows the more apical parts of the cochlea to remain intact so that they can respond to sound (e.g., Gantz & Turner 2003, 2004). Overall, the outcomes achieved with binaural and bimodal cochlear implants have been quite successful (e.g., Schafer, Amlani, Seibold, & Shattuck 2007; Wilson & Dorman 2008; Schafer, Amlani, Paiva, Nozari, & Verret 2011).
Auditory brainstem implants deliver the electrical stimulation to the brainstem. The electrode array is arranged on a paddle-shaped silicone holder as shown in Fig. 16.2e, and it is usually placed on the cochlear nucleus. Auditory brainstem implants are used when cochlear implants are not possible due to, for example, neurofibromatosis type 2 (NF2) and non-tumor etiologies such as congenital aplasia of the eighth nerve and traumatic injuries. Unlike in Europe, auditory brainstem implants in the United States were limited to adults until clinical trials in children were approved by the Food and Drug Administration in 2013. Early speech perception results with brainstem implants were poor in patients with NF2 compared with those with non-tumor causes, but improved performance in NF2 patients has been reported more recently (e.g., Otto, Brackmann, Hitselberger, Shannon, & Kuchta 2002; Colletti & Shannon 2005; Behr, Müller, Shehata-Dieler, et al 2007; Colletti, Shannon, Carner, Veronese, & Colletti 2009; Skarzyński, Behr, Lorens, Podskarbi-Fayette, & Kochanek 2009; Colletti, Shannon, & Colletti 2012).
Candidacy for Cochlear Implants
A patient’s candidacy for a cochlear implant depends on many factors (e.g., Chute & Nevins 2000; ASHA 2004; Waltzman & Roland 2007; AAA 2013). Cochlear implants are currently approved by the Food and Drug Administration (FDA) beginning at 12 months old for children with bilateral profound sensorineural hearing losses [generally defined as a pure tone average (PTA) ≥ 90 dB HL], and beginning at 18 months of age for those with bilateral severe to profound sensorineural losses (PTA ≥ 70 dB HL). However, cochlear implants can also be used for younger patients and a wider range of losses on an “off-label” basis or in clinical trials; and good results have been obtained for those implanted before 12 months of age (e.g., James & Papsin 2004; Nicholas & Geers 2004; Waltzman 2005; Waltzman & Roland 2005; Lesinski-Schiedat et al 2006; Dettman, Pinder, Briggs, Dowell, & Leigh 2007; Tait, DeRaeve & Nikolopoulos 2007; Ching et al 2013). In addition, children who are candidates for a cochlear implant should be considered for bimodal stimulation (implant in one ear and hearing aid in the other) when there is any residual hearing in the opposite ear (AAA 2013).
Speech perception assessment is a fundamental aspect of cochlear implant use from candidacy decisions through evaluating the effects of intervention. Many speech perception batteries, along with selected individual tests, have been developed over the years (e.g., Owens, Kessler, Telleen, & Schubert 1981; Owens, Kessler, Raggio, & Schubert 1985). Other batteries and tests are also available (e.g., Tyler, Preece, & Lowder 1983; Osberger et al 1991; Beiter & Brimacombe 1993; Nilsson, McCaw, & Soli 1996; MSTB 2011), and it is possible to pick and choose among them to formulate the optimum mix for a particular clinical situation.
Speech perception testing for cochlear implants has evolved as the very limited benefits of early single-channel devices have given way to the great potential offered by modern multiple-channel devices and advanced speech processing strategies. To get a flavor for the range of instruments that may be used, we will briefly examine the revised Minimal Auditory Capabilities (MAC) battery, the Monosyllable-Trochee-Spondee test, and the current version of the Minimum Speech Test Battery (MSTB 2011).
The revised Minimal Auditory Capabilities (MAC) Battery (Owens et al 1985) exemplified the assessment approaches that were used in the early days of cochlear implantation. As shown in Table 16.5, it included 13 auditory-only speech tests arranged in order from easiest to hardest, followed by a visual enhancement test, which also involved lipreading. Several of the tests should be familiar from Chapter 8. Notice that the MAC battery was designed to examine a very wide range of speech perception abilities from simple discriminations among spondee words through open-set monosyllabic speech recognition. In addition, the last test (visual enhancement) was included to ascertain whether the patient’s lipreading performance was improved by the addition of auditory cues.
Table 16.5 Description of the subtests of the revised MAC batterya
Test name | Description |
Spondee Same/Different | Discrimination task in which the patient must indicate whether two spondee words are the same or different. |
Four-Choice Spondee | Closed set recognition test in which the patient must identify a spondaic test word from four alternatives. |
Noise/Voice | The patient must determine whether sounds are noises or the human voice. The stimuli are noises with different spectra and temporal-intensity envelopes (i.e., how intensity varies over time) and sentences spoken by different talkers. |
Final Consonants | Closed-set word recognition task in which the patient must identify a monosyllabic test word from four alternatives that have different final consonants (e.g., “rid/rip/rib/ridge”). |
Accent | Perception of prosody. Four-word phrases are presented in which one word is stressed or accented (e.g., “can you FIX it?”). Patient must select it from four closed-set alternatives. |
Everyday Sounds | Open-set task in which the patient must identify familiar sounds (a doorbell, people talking, etc.). |
Initial Consonants | Closed-set word recognition task in which the patient must identify a monosyllabic test word from four alternatives that have different initial consonants (e.g., “din/bin/fin/gin”). |
Question/Statement | Prosody perception task in which the patient must identify phrases as questions (rising inflection) or statements (falling inflection). |
Vowels | Closed-set word recognition task in which the patient must identify a monosyllabic test word from four alternatives that have different medial vowels or diphthongs (e.g., “fool/full/fall/foul”). |
CID Everyday Sentences | Open-set sentence recognition test scored based on key words correctly repeated. |
Spondee Recognition | Open-set test in which the patient is asked to repeat each of 25 spondaic words. Half credit is given when one syllable is correct. |
Words In Context | Probability-High SPIN sentences are presented. Originally scored based on last word, but scoring was subsequently changed to sentence recognition based on correctly repeated key words. (A key word is considered right as long as its root is correctly identified.) |
Monosyllabic Words | Standard monosyllabic word recognition using NU-6 materials. |
Visual Enhancement | CID Everyday Sentences (scored by key words) presented (a) visually only (unaided lipreading) and (b) by lipreading plus amplified sound (aided lipreading). Both aided and unaided lipreading are tested to ascertain whether the patient’s lipreading performance is enhanced by the auditory cues. |
aBased on Owens, Kessler, Raggio, and Schubert (1985).
The Minimum Speech Test Battery (MSTB) is a contemporary approach for adult cochlear implant patients (MSTB 2011). The MSTB includes the Peterson and Lehiste (1962) CNC word recognition test, given in quiet; the AzBio Sentences (Spahr & Dorman 2004; Spahr, Dorman, Litvak, et al 2012), administered both in quiet and against a background of multi-talker babble; and the Bench-Koval-Bamford Speech-In-Noise (BKB-SIN) test (Etymotic Research 2005). Thus, it provides assessments of the patient’s speech perception skills in terms of quiet word recognition, sentence recognition in quiet and noise, and signal-to-noise ratio for sentences. An earlier version of the MSTB (Nilsson, McCaw, & Soli 1996) used the Hearing-in-Noise-Test (HINT) sentences (Nilsson, Soli, & Sullivan 1994); however, they were found to be too easy (Gifford, Shallop, & Peterson 2008) and were thus replaced with the more challenging AzBio sentences. Comparing the MAC battery of the 1980s, which contained several very easy tasks, to the much more demanding focus of the current MSTB illustrates how performance with cochlear implants has improved over the years.
When dealing with very limited speech recognition ability, especially in young children, it often becomes important to know whether the patient can at least recognize differences in the temporal or rhythmic patterns of speech. This type of information is provided by the Monosyllable-Trochee-Spondee (MTS) test (Erber & Alencewicz 1976), which is often used with cochlear implant batteries or included in them. Recall from Chapter 12 that the MTS uses three kinds of stimulus words that differ with respect to the number of syllables and the stress pattern, including (1) monosyllabic words; (2) spondaic words, which have two equally stressed syllables (e.g., “baseball”); and (3) trochaic words, which have two unequally stressed syllables (e.g., “button”). The test words are presented to the child, who responds by pointing to corresponding pictures. Examining the relationships between the stimulus words and responses reveals whether the child can correctly identify words with different levels of difficulty, and also whether she is able to take advantage of temporal and/or stress patterns in speech.
Typical performance requirements for cochlear implant candidates are (a) ≤ 50% on open-set sentence recognition tests for adults, (b) ≤ 30% on appropriate word recognition tests for children over ~ 24 months of age, and/or (c) a lack of auditory skill development in those too young for speech reception testing (e.g., those up to 24 months old). In the latter case, auditory skill development is assessed using age-appropriate instruments such as the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS; Zimmerman-Phillips, Robbins, & Osberger 2000) or the Meaningful Auditory Integration Scale (MAIS; Robbins, Renshaw, & Berry 1991). Candidates for cochlear implants should typically be receiving only minimal if any benefit from amplification. Implant decisions for infants should include a trial period with hearing aids, if possible (AAA 2013).
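The score cutoffs just listed can be expressed as a simple decision rule. This sketch encodes only the two cutoffs from the text for illustration; real candidacy decisions weigh many additional audiological, medical, and developmental factors, and the adult/child split at age 18 here is an assumption for the example.

```python
# Simplified sketch of the typical score criteria described in the text:
# adults at or below ~50% on open-set sentence recognition; children over
# ~24 months at or below ~30% on appropriate word recognition tests.
# This is only the score component of candidacy, not a complete rule.

def meets_score_criterion(age_years, score_percent):
    """True if the speech perception score falls at or below the cutoff."""
    if age_years >= 18:
        return score_percent <= 50  # adult: open-set sentence recognition
    return score_percent <= 30      # child over ~24 months: word recognition

print(meets_score_criterion(45, 38))  # True: adult at 38% on sentences
print(meets_score_criterion(5, 42))   # False: child at 42% on words
```

Children too young for speech reception testing fall outside any score rule like this one; as the text notes, their auditory skill development is instead assessed with instruments such as the IT-MAIS or MAIS.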
Several medical and surgical criteria must be met for cochlear implantation (e.g., Gray 1991; Chute & Nevins 2000; Waltzman & Shapiro 2000). There must be working auditory nerve fibers that can be stimulated by the implant, and the relevant cochlear anatomy must be present. These criteria are met by using high-resolution computerized tomography (CT) scans to rule out such structural abnormalities as cochlear agenesis or dysplasia, or the absence of an auditory nerve. The viability of the remaining auditory nerve fibers can be tested by seeing if an auditory sensation is produced by electrical stimulation of the promontory or round window niche. This procedure involves inserting a needle electrode through the tympanic membrane. The integrity of the existing auditory nerve fibers can also be demonstrated by the presence of low-frequency audiometric thresholds, provided they are actually heard as opposed to being tactile responses. In addition to these factors, the patient’s general health must be good enough to undergo surgery and general anesthesia.
In addition, the potential complications of cochlear implantation should be discussed with the patient and/or parents. Even though cochlear implant surgery has a relatively low incidence of major complications, they can and do occur (see, e.g., Papsin & Gordon 2007). Bacterial meningitis is a rare but serious potential complication that also needs to be addressed (CDC 2007; FDA 2007). As a result, it has been recommended that (a) children with cochlear implants receive a complete course of meningitis vaccinations, (b) patients and those dealing with them should be able to identify the signs of meningitis so that treatment can be undertaken as early as possible, (c) otitis media be promptly diagnosed and treated, and (d) the prophylactic use of antibiotics be considered during cochlear implant surgery (CDC 2007; FDA 2007).
Realistic expectations play an especially important role in cochlear implant candidacy. The patient and/or parents need to know about the variability of the results and the importance of a strong commitment to training after the device is installed.
After the surgery is completed and healing has occurred, the next step is to program the device for optimal use by the individual patient. The programming process is also called mapping. Programming involves adjusting the electrical currents produced by the electrodes to yield appropriate thresholds and comfortable listening levels, and allocating frequency bandwidths to the various electrodes. Children often require a period of preprogramming training to teach them the tasks involved in programming the cochlear implant.
Experience with the cochlear implant is essential after it has been installed and programmed. In general, patients should expect a minimum of 3 months, and often considerably longer, before performance levels off (Wilson & Dorman 2008). A period of training is highly desirable and often essential, and it can last anywhere from a month or so for adults with adventitious deafness to an extensive and comprehensive long-term intervention program for children with prelingual deafness.
Clinical Outcomes with Cochlear Implants
Let us now look at some of the benefits provided by cochlear implants. Notice that, in general, success with cochlear implants occurs when (1) the onset of the hearing impairment is postlingual rather than prelingual, (2) the duration of deafness is shorter rather than longer, and (3) implantation occurs sooner rather than later.
Although performance varies widely, contemporary cochlear implants provide many adults with postlingual deafness considerable speech perception benefits, including open-set recognition performance, which tends to improve with time over the course of about 3 months or so (e.g., Hollow et al 1995; Staller, Menapace, Domico, et al 1997; Geier, Barker, Fisher, & Opie 1999; Rubinstein, Parkinson, Tyler, & Gantz 1999; Waltzman, Cohen, & Roland 1999; Osberger & Fisher 2000; Tyler, Dunn, Witt, & Noble 2007). More or less similar benefits appear to be obtained by elderly patients (e.g., Labadie, Carrasco, Gilmer, & Pillsbury 2000; Francis, Chee, Yeagle, Cheng, & Niparko 2002; Pasanisi, Bacciu, Vincenti, et al 2003; Leung, Wang, Yeagle, et al 2005).
Cochlear implantation is appropriate for children with prelingual hearing impairments as well as those who have postlingual losses (due to, e.g., meningitis). Keeping in mind that performance varies widely, the benefits provided to children by cochlear implants are quite impressive in such areas as speech perception (including open-set speech recognition), speech production, language development, and literacy and academic achievement (e.g., Cohen, Waltzman, Roland, Staller, & Hoffman 1999; Eisenberg, Martinez, Sennaroglu, & Osberger 2000; Geers, Nicholas, Tye-Murray, et al 2000; Meyer & Svirsky 2000; Papsin, Gysin, Picton, Nedzelski, & Harrison 2000; Tyler et al 2000; Teoh, Pisoni, & Miyamoto 2004; Blamey, Sarant, Paatsch, et al 2001; Geers, Nicholas & Sedey 2003; Stacey, Fortnum, Barton, & Summerfield 2006; Geers, Tobey, & Moog 2011). In addition, children and adolescents with cochlear implants provided similar responses to those of their normal-hearing peers on a health-related quality-of-life questionnaire (Loy, Warner-Czyz, Tong, Tobey, & Roland 2010).
The outcomes just described are indeed impressive, and the performance of children with cochlear implants far exceeds what would have been accomplished without them. However, it is essential to reiterate that performance varies widely. It is equally important to realize that although children with cochlear implants often approach the performance of their normal-hearing peers, they still lag behind them, and that the size of the gap can be on the order of one standard deviation (see, e.g., Geers, Moog, Biedenstein, Brenner, & Hayes 2009; Geers & Hayes 2011; Geers & Sedey 2011; Most, Shina-August, & Meilijon 2010; Ching et al 2013; Guo, Spencer, & Tomblin 2013; Nittrouer, Caldwell, Lowenstein, et al 2013; Nittrouer 2015). Clearly, there is still more to be done.
Performance benefits have been shown to improve with earlier implantation (Fryauf-Bertschy, Tyler, Kelsay, Gantz, & Woodworth 1997; Nicholas & Geers 2004; McConkey Robbins, Koch, Osberger, Zimmerman-Phillips, & Kishon-Rabin 2004; Svirsky, Teoh, & Neuburger 2004; Lesinski-Schiedat et al 2006; Ching et al 2013), the effects of which are enhanced by early identification made possible by universal infant screening programs (Chapter 13) and the trend toward implantation during the first year of life, as noted above. In addition, unlike the leveling off of performance after several months that occurs in adults, children’s performance continues to improve with increasing cochlear implant usage over time (e.g., Miyamoto et al 1992; Fryauf-Bertschy et al 1997; Tyler et al 2000).
Until recently, cochlear implants had been placed in just one ear. However, just as we saw that binaural amplification provides advantages over monaural hearing aids, recent studies have shown beneficial outcomes with binaural cochlear implants in both children and adults (e.g., Kühn-Inacker, Shehata-Dieler, Müller, & Helms 2004; Litovsky, Parkinson, Arcaroli, et al 2004; Litovsky, Johnstone, Godar, et al 2006; Verschuur, Lutman, Ramsden, Greenham, & O’Driscoll 2005; Galvin, Mok, & Dowell 2007; Tyler, Dunn, Witt, & Noble 2007). As already pointed out, unilaterally implanted patients should be considered for a hearing aid in the un-implanted ear if it has any residual hearing (AAA 2013).
Tactile Aids
Tactile aids are another class of sensory aids for the deaf that use tactile sensations to substitute for auditory ones. With tactile aids, sounds are picked up by a microphone, analyzed and encoded by a processor, and then transmitted to the skin using vibrators (vibrotactile stimulators) or electrical (electrotactile or electrocutaneous) stimulators. The stimulators have been worn at various sites, such as the wrist, arm, chest, abdomen, waist, and thighs.
Single-channel tactile devices typically use the incoming speech signal to modulate the vibration, and represent sound amplitude as the strength of the vibration. Multiple-channel tactile aids use an array of vibrators laid out on the skin. For example, several stimulators may be arranged in a line, with frequency represented by which vibrators are active, and intensity coded as the strength of the vibration. More complicated representations use a matrix of stimulators in columns and rows, so that the vibratory pattern represents the spectrum of an incoming sound as a graph “drawn” on the skin.
Tactile aids have been studied in some detail (e.g., Sherrick 1984; Hnath-Chisolm & Kishon-Rabin 1988; Cowan et al 1990; Carney et al 1993; Osberger, Maso, & Sam 1993; Weisenberger & Percy 1995; Kishon-Rabin, Boothroyd, & Hanin 1996; Kishon-Rabin, Haras, & Bergman 1997; Sehgal, Kirk, Svirsky, & Miyamoto 1998; Galvin et al 1999; Plant, Gnosspelius, & Levitt 2000; Reed & Delhorne 2003). Although tactile aids are much cheaper than cochlear implants and do not involve surgery, considerable training and practice are needed to make optimal use of the tactual signals. While tactile aids do not enable open-set word recognition for tactual-only stimulation, they do provide the important benefit of lipreading enhancement. In spite of their limitations, one should remember that tactile aids can be desirable for patients who are not candidates for cochlear implants or choose not to have one, and also can be beneficial during the period prior to receiving an implant, for preimplantation training, etc.
Room Acoustics
In spite of the benefits provided by their personal hearing aids, hearing-impaired children still have considerable problems communicating in classrooms because of noise and reverberation. As we will see in the next section, approaches known as hearing assistance technologies are used to minimize these effects and thereby present an optimal signal to each hearing-impaired child (or adult) in the room. Even though we are focusing on the hearing-impaired child trying to listen to the teacher in a classroom, the same issues apply to all hearing-impaired persons in all noisy and/or reverberant environments, such as theaters, houses of worship, auditoriums, and lecture halls. In fact, hearing assistance technologies are beneficial for a wide range of individuals who may have normal or near normal hearing sensitivity, such as those with auditory processing disorders, developmental disabilities, language and/or learning disorders, and auditory neuropathy spectrum disorder, as well as those operating in their non-native language (e.g., ASHA 2005b; AAA 2007).
Room acoustics have a considerable impact on the effectiveness and quality of communication, and the importance of hearing in all facets of education makes classroom acoustics in particular an area of major concern (e.g., ASHA 2005a,b; Crandell, Smaldino, & Flexer 1997, 2005; Palmer 1997; Rosenberg, Blake-Rahter, Heavner, et al 1999; Crandell & Smaldino 2000; ANSI 2010a,b; Seep et al 2003; Nelson, Soli, & Seltz 2003).
Classroom noise comes from a variety of sources inside and outside the room, with which we are all familiar from common experience. Ambient noise levels vary among public school classrooms, but can reach ~ 60 dB or more (Ross & Giolas 1971; Crandell et al 1997; Rosenberg et al 1999; Crandell & Smaldino 2000; Knecht, Nelson, Whitelaw, & Feth 2002). For example, Knecht et al (2002) found that noise levels in elementary school classrooms ranged from 34.4 to 65.9 dBA, averaging 39.8 dBA in rooms where the heating, ventilation, and air-conditioning (HVAC) systems were off, and 49.7 dBA in rooms where the HVAC systems were on. In addition, Hodgson, Rempel, and Kennedy (1999) found that university classrooms and lecture halls had an average noise level of 44.4 dBA.
Particularly important for speech understanding is the relationship between the levels of the teacher’s speech and the noise at the child’s ears (or at the microphones of her hearing aids/cochlear implants). Recall that this relationship is a signal-to-noise ratio (SNR). The SNR is often called the message-to-competition ratio (MCR) when the noise is a competing speech signal, or the speech-to-babble ratio (SBR) when the noise is a babble composed of several voices. Positive SNRs mean that the level of the signal (the speech) is greater than that of the noise, and negative SNRs indicate that the noise is stronger than the speech. For example, a +6 dB SNR means the signal is 6 dB greater than the noise, and a –6 dB SNR means the noise is 6 dB higher than the speech. A 0 dB SNR means the levels of the signal and noise are the same. Fig. 16.3 illustrates these relationships as well as the manner in which SNR deteriorates with distance from the talker. Notice that the SNR falls from +18 dB immediately in front of the teacher’s lips (at 1 foot) to 0 dB at a distance of just 8 feet, and to –6 dB when the child is 16 feet away.
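The deterioration of SNR with distance follows from the fact that the direct speech level falls with distance from the talker (by the inverse square law, roughly 6 dB per doubling of distance) while the ambient noise level is about the same everywhere in the room. The following short sketch illustrates this relationship; the starting value of +18 dB at 1 foot is taken from the idealized example above, and the simple free-field decay rule is an assumption of the sketch (real rooms also contribute reverberant energy):

```python
import math

def snr_at_distance(snr_at_1ft_db, distance_ft):
    """Estimate the SNR at a given distance from the talker.

    Assumes the direct speech level falls 6 dB per doubling of
    distance (inverse square law: 20*log10 of the distance ratio)
    while the room noise level stays constant, so the SNR falls by
    the same amount.
    """
    return snr_at_1ft_db - 20.0 * math.log10(distance_ft)

# Reproduce the pattern described in the text: +18 dB at 1 foot,
# about 0 dB at 8 feet, and about -6 dB at 16 feet.
for d in (1, 2, 4, 8, 16):
    print(f"{d:2d} ft: SNR = {snr_at_distance(18.0, d):+5.1f} dB")
```

Note that each doubling of the child's distance from the teacher costs about 6 dB of SNR, which is why the seating position and the microphone placement used by hearing assistance technologies (discussed below) matter so much.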
Hearing-impaired persons need higher SNRs than normal individuals to achieve similar levels of performance while listening to speech in noise (Dubno, Dirks, & Morgan 1984; Gelfand, Ross, & Miller 1988). Hearing-impaired children require SNRs of at least +10 to +20 dB for effective classroom performance (Gengel 1971; Finitzo-Hieber & Tillman 1978). Yet typical schoolroom SNRs are often only +1 to +6 dB (Finitzo 1988). In fact, even children who have minimal amounts of sensorineural hearing loss (with pure tone averages between 15 and 30 dB HL) have significantly poorer speech recognition than their normal-hearing counterparts, and the disadvantage becomes greater as the SNR decreases (Crandell 1993), as illustrated in Fig. 16.4.
Reverberation is the term used to describe the reflections of sounds from the walls, floors, ceilings, and other hard surfaces in the room. This multiplicity of reflections is perceived as a prolongation of the duration of a sound in a room. It can be heard by sharply clapping one’s hands once in a classroom, and noticing that the resulting sound lingers after the initial sound. Reverberation is measured in terms of reverberation time (RT), which is the duration of the reflections. Specifically, RT is how long it takes for the reflected sounds to fall to a level that is 60 dB less than the original sound. Typical classroom reverberation times range from ~ 0.2 to roughly 1.3 seconds (Crandell & Smaldino 2000; Knecht et al 2002). Speech recognition is adversely affected by reverberation because the reflections mask the direct sound and also because some speech cues are distorted by their prolongation (Nabelek & Robinette 1978; Gelfand & Silman 1979; Gelfand 2010). Speech perception worsens as reverberation time gets longer and when reverberation is mixed with noise, and the effect is greater for people with hearing loss than it is for those with normal hearing (Gelfand & Hochberg 1976; Finitzo-Hieber & Tillman 1978; Nabelek & Robinette 1978; Yacullo & Hawkins 1987; Crandell & Smaldino 2000). Moreover, it appears that children do not attain adult levels of consonant recognition in noise or reverberation until their early teens, and may not attain adult levels of performance in noise-plus-reverberation until their late teens (Johnson 2000).
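The RT definition just given can be expressed directly: if the reflected sound in a room decays at some rate in dB per second, the reverberation time is simply the time needed to accumulate 60 dB of decay. The sketch below illustrates this under the simplifying assumption of a constant (linear-in-dB) decay rate; the decay rates used are illustrative values chosen to land within the 0.2 to 1.3 second classroom range cited above:

```python
def reverberation_time(decay_db_per_sec):
    """Reverberation time (RT60): the time for the reflected sound to
    fall 60 dB below the original level, assuming an idealized
    constant decay rate in dB per second."""
    return 60.0 / decay_db_per_sec

# A room whose reflections die away at 100 dB/s has RT = 0.6 s;
# a more reverberant room decaying at only 50 dB/s has RT = 1.2 s.
# Both fall within the 0.2-1.3 s range reported for classrooms.
print(reverberation_time(100.0))  # 0.6
print(reverberation_time(50.0))   # 1.2
```

Viewed this way, a longer RT means the reflections of each speech sound persist longer, overlapping and masking the sounds that follow, which is consistent with the poorer speech recognition in reverberation described above.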
Fig. 16.3 Speech level and signal-to-noise ratio (SNR) at various distances from the talker’s lips (idealized).