Overview
The modern cochlear implant is a device designed to convert environmental sound into electrical impulses that are delivered along a multiple-electrode array situated in close proximity to the cochlear (auditory) nerve. Since the distal cochlear nerve fibers are arranged along the tonotopic, longitudinal axis of the cochlear lumen, application of discrete electrical stimulation at progressively deeper stations results in progressively lower psychophysical frequency percepts. As this paradigm does not necessarily rely upon hair cell transduction mechanisms, it is ideal for patients with significant hair cell loss. Since most patients with sensorineural hearing loss maintain some degree of functional auditory nerve fibers, these devices are well suited to most etiologies of hearing loss. Given the need for surgery and the effectiveness of hearing aids for lesser degrees of hearing impairment, the cochlear implant has been reserved for patients with significant degrees of hearing loss that are not easily rehabilitated with conventional amplification.
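This frequency-place relation is often approximated by Greenwood's function. As an illustration only (a population-average approximation, not a device programming formula), the short sketch below estimates the characteristic frequency at a given fractional distance along the cochlear duct:

```python
# Greenwood's frequency-place function for the human cochlea (approximate):
# F = A * (10**(a * x) - k), where x is the fractional distance from the apex.
def greenwood_hz(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Characteristic frequency (Hz) at fractional distance x (0 = apex, 1 = base)."""
    return A * (10 ** (a * x) - k)

# Deeper (more apical) stimulation sites map to lower frequency percepts:
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_hz(x):8.0f} Hz")
# ~20 Hz at the apex rising to ~20,000 Hz at the base
```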
Outcomes for cochlear implantation can be defined in terms of sound awareness, enhancement of lipreading, improvements in speech perception, quality of life, and cost-effectiveness. Broader outcomes for adults include improvements in socialization, employment opportunities, income, and overall health and well-being (1,2,3,4). For children, extended outcomes include speech and oral language production, educational setting and achievement, the need for specialized services, and safety (5,6,7). Recently, expanding indications for implantation and combining the electrical stimulation of a cochlear implant with native acoustic stimulation are broadening the application of this technology.
Historical Aspects
The first electrical stimulation of the inner ear probably was performed by Count Alessandro Volta at the end of the 18th century, when he placed two metal rods in his ears and connected them to the terminals of 30 or 40 electrolytic cells (~50 V). He reported the sensation of “une secousse dans la tête,” or a blow on the head, followed by a sound like “the boiling of a viscid liquid” (8). Djourno and Eyries are credited with the first intralabyrinthine implantation of an electrical stimulating prosthesis, placed through a labyrinthine fistula in a patient with chronic otitis media and cholesteatoma in Paris in 1957. Djourno was a French neurophysiologist and Eyries an established otologist. Upon stimulation, the patient described high-frequency sounds that resembled the “roulette wheel of a casino” and “crickets.” Their patient was able to discern the words “pap,” “mamm,” and “allo” (9,10,11).
In 1957, a patient in Los Angeles, CA, brought William House, MD, a news article detailing the apparent success of Djourno and Eyries. By 1960 and 1961, House, with his colleagues Doyle and Doyle, was experimenting with electrical stimulation of the inner ear. House performed promontory and vestibule stimulation in patients undergoing stapedectomy surgery (12). Using a square-wave generator, he noted that patients could detect auditory percepts from electrically delivered currents better in the perilymph of the vestibule than on the promontory. Above 30 Hz, direct stimulation of the perilymph caused neither dizziness nor facial nerve stimulation. In the 1960s and 1970s, numerous other groups were exploring electrical stimulation of the inner ear, primarily for the purpose of hearing. Teams including F. Blair Simmons and Robert White (Stanford University); Donald Eddington (University of Utah); Robin Michelson, Michael Merzenich, and Robert Schindler (University of California at San Francisco); Claude-Henri Chouard (France); and Graeme Clark (University of Melbourne) were most active. In contrast with House, these efforts were mostly directed at developing a multichannel cochlear implant (CI) (13,14,15,16,17,18).
Blair Simmons, MD, with the assistance of Epley at Stanford (1964), implanted six electrodes in the modiolar portion of the eighth nerve near the basal cochlear fibers under local anesthesia (19). Electrical stimulation resulted in some auditory percepts. Following these initial experiments, House and Doyle implanted two adult patients with single gold electrodes for short-term stimulation of hearing. One additional patient received a five-electrode device. All three of these devices were later removed because of biocompatibility issues (12). Specifically, the wires exited through the skin and resulted in local wound infection and irritation.
At the University of Paris as well as the Universities of Utah and California at San Francisco (UCSF), percutaneous connectors attached to intracochlear electrode arrays allowed stimulation to be delivered in a variety of ways, ultimately enabling great advances in the development of signal-processing strategies (14,15,17). Ultimately, however, similar to the experiences of House and Simmons, these devices required removal because of local infection. It became clear that the percutaneous connection was unsustainable and that an alternative method for energy and data transfer was needed.
House later teamed with engineer Jack Urban to produce the first wearable, “take home” cochlear implant, implanted in Chuck Graser in 1972 (12,20). This was possible because of the use of newer, biocompatible materials developed in the pacemaker industry. Graser's initial device was a five-wire electrode grounded through the footplate of the stapes. He was ultimately fitted with a speech processor/stimulator that had a single electrode delivering a 16-kHz sinusoidal carrier wave. While this implant, as well as the one used by Michelson at UCSF (21), failed to produce open-set speech perception, these devices proved that simple transcutaneous, inductive coupling was possible and that a percutaneous connection was not needed (21).
Dobelle et al. (22) later proposed a single radiofrequency (RF) link for both data and power transfer. In 1974, the University of Melbourne produced an electromagnetic dual link for data and power transfer that was highly efficient. This method allowed for transcutaneous, efficient transfer of power and data and is currently used in the Food and Drug Administration (FDA)-approved systems that are clinically available today in the United States (US) (23).
Creating a hermetically sealed device was a major challenge for early investigators. Graeme Clark and colleagues in Melbourne (1974) melted glass onto wires that exited a Kovar steel container housing the circuitry (24). This was unsuccessful, as fluid leakage persisted. In the pacemaker industry, epoxy resin was also unsuccessful. K. Kratochivil at Telectronics, a pacemaker company in Australia, discovered that when a blend of ceramics was sintered, it would bond to both the wires and the metallic container to produce an impermeable seal. For Cochlear Corp in Australia, this technology, when used in combination with a titanium package for strength, produced the hermetically sealed device that is in use today. However, this construct required moving the data transmission antenna to a remote site, creating an elongated device with a susceptible antenna connection (24). After the initial multichannel prototype device was implanted in 1978, great strides were made in the 1980s demonstrating the superiority of these place-coding devices over the single-channel implants of House (25,26,27,28,29). From this time forward, place coding using multiple electrodes situated longitudinally along the course of the scala tympani to stimulate the tonotopic organization of the cochlear nerve became the preferred approach (30,31,32).
From Clark’s work arose Cochlear Corp, the largest manufacturer of cochlear implants in the world today. In 1984, the Australian multichannel cochlear implant was introduced into the market and approved by the FDA for use in adults. In 1990, approval was extended to children older than 2 years of age. Over the next decade, the FDA-approved minimum age for implantation in children was lowered to 12 months.
With the help of Ingeborg and Erwin Hochmair in Vienna, Austria, Kurt Burian implanted a multichannel device in 1977. From this effort grew the MED-EL Corporation. In California, the work at UCSF by Michelson, Merzenich, and Schindler progressed to form Advanced Bionics Corporation (Sylmar, California, USA), the only US-based cochlear implant manufacturer. For both Advanced Bionics and MED-EL, a ceramic housing was chosen so that the antenna and electronics could be included in the same package. Unfortunately, ceramic was more brittle and susceptible to cracking in response to external trauma. The welding of ceramic to the metal header also created a relatively weak point in the hermetic seal. Over time, the ceramic construct of these devices has given way to the silastic-titanium design that is in use today by all three manufacturers producing FDA-approved CIs in the US.
Over the last 30 years, electrode and speech processor developments have produced more effective stimulation strategies associated with successively higher performance levels. Further miniaturization of components has resulted in small, behind-the-ear speech processors and very thin, atraumatic electrodes that preserve intracochlear structures.
In the early 2000s, bilateral cochlear implantation, as well as implantation with preservation of significant acoustic residual hearing, broadened the indications for these devices. Moreover, combining acoustic stimulation with the electrical signal provided by the cochlear implant has begun to produce results that previously had not been considered possible (33,34,35,36). Today, sound localization, hearing in noise, and even music appreciation are becoming possible.
In 2005, a totally implantable cochlear implant was developed in Sydney and implanted in Melbourne, Australia, as part of a research project conducted by Cochlear Ltd and the University of Melbourne (37). This was the first cochlear implant system capable of functioning for sustained periods with no external components. Because this device used a subcutaneous microphone, significant attenuation of the signal and interference from bodily noise limited wide-scale application. Nonetheless, the stage has now been set for a fully implantable cochlear implant. With further advances, it seems probable that one day normal hearing might be restored through combinations of technology without the need for visible external hardware.
In the 1960s and 1970s, considerable opposition arose within the scientific community regarding the possibility of speech understanding by patients with cochlear implants. Specifically, “auditory physiologists and histopathologists dismissed these investigations as misguided attempts by surgeons—who know little about auditory neuroscience—to stimulate nerves that were already dead” (26). Moreover, the deaf community identified cochlear implants as an unacceptable intervention that “didn’t work” and ultimately threatened a child’s right to be deaf—a cultural right (38). Over time, considerable improvements in technology, indications, and results, combined with the fact that most deaf children are born to two hearing parents who want their child to communicate in an auditory-oral manner, have lessened this opposition. Today, cochlear implants are widely regarded as an accepted and efficacious choice for patients who are deaf or significantly hearing-impaired.
Coding Strategies
The earliest speech-coding strategy, used by the House-3M single-channel device, employed amplitude modulation of a 16-kHz sinusoid by the bandpass-filtered audio input signal. The incoming signal needed to be compressed to cope with the limited dynamic range for electrical stimulation of the auditory nerve. Even though most of the original temporospectral information of the analog signal was preserved, signal transfer to the auditory nerve was limited by the maximal firing frequency of the nerve in response to electrical stimulation at a single site within the cochlea. High synchronization of nerve fibers and the neural refractory period allowed frequency transmission only up to about 1 kHz via pure temporal coding. For higher frequencies, the spectral information could not be sufficiently transferred. While these single-channel devices provided sound awareness and enhancement of lipreading, open-set speech perception was rarely achieved, thus paving the way for the frequency-place coding that is used in all of the multichannel devices on the market today (39).
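As a rough illustration of this single-channel approach, the sketch below bandpass-filters an audio signal, compresses it, and amplitude-modulates a 16-kHz carrier. The sample rate, filter cutoffs, and compression exponent are assumptions chosen for illustration, not the House-3M specifications:

```python
# Minimal sketch of a single-channel AM strategy (illustrative values only;
# not the actual House-3M parameters).
import numpy as np
from scipy.signal import butter, sosfilt

FS = 64_000          # sample rate (Hz); assumed, must exceed 2 x 16 kHz carrier
CARRIER_HZ = 16_000  # sinusoidal carrier, as used by the House-3M device

def single_channel_am(audio: np.ndarray) -> np.ndarray:
    """Bandpass-filter, compress, and amplitude-modulate onto a 16-kHz carrier."""
    # Bandpass the speech band (cutoffs assumed for illustration).
    sos = butter(4, [300, 3_000], btype="bandpass", fs=FS, output="sos")
    band = sosfilt(sos, audio)
    # Instantaneous power-law compression to squeeze the wide acoustic dynamic
    # range into the narrow electrical range (exponent assumed).
    compressed = np.sign(band) * np.abs(band) ** 0.3
    # Amplitude-modulate the carrier with the compressed signal.
    t = np.arange(len(audio)) / FS
    return compressed * np.sin(2 * np.pi * CARRIER_HZ * t)

# Example: a 1-second synthetic vowel-like input.
t = np.arange(FS) / FS
audio = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.2 * np.sin(2 * np.pi * 880 * t)
stimulus = single_channel_am(audio)
```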
The earliest multichannel place-coding devices used bandpass filters to separate frequency bands and compression to reduce the wide acoustic dynamic range into the electrical stimulation range of the cochlear nerve (~20 dB). However, these devices remained limited by current spread and channel interaction, thereby limiting spectral resolution. Poor spectral resolution probably contributed to significant spectral mismatch between the frequency allocation of a given electrode and the perceptual consequences of stimulation. Attempts to improve this spectral mismatch have included moving the site of electrode activation closer to the neural elements by creating modiolar-conforming arrays, intraneural electrodes, light stimulation, or neurotrophic factors (40,41,42). Another approach has been to activate multiple electrodes at differing times (i.e., temporally interleaved stimulation) to avoid channel interactions, thereby improving stimulation specificity.
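To make the compression step described above concrete, the sketch below maps an acoustic envelope spanning roughly 60 dB onto a listener's narrow electrical range between threshold (T) and comfort (C) stimulation levels. The linear-in-dB map and the specific input range and T/C values are illustrative assumptions, not any manufacturer's fitting formula:

```python
# Minimal sketch: map acoustic envelope level (dB) into a narrow electrical
# range between threshold (T) and comfort (C) levels. The 60-dB acoustic
# input range and the T/C values are assumptions for illustration.
import numpy as np

ACOUSTIC_MIN_DB, ACOUSTIC_MAX_DB = 30.0, 90.0  # assumed acoustic input range
T_LEVEL, C_LEVEL = 100.0, 200.0                # electrical levels (arbitrary units)

def compress_to_electrical(env_db: np.ndarray) -> np.ndarray:
    """Linearly map envelope dB into the T-C electrical range."""
    frac = (env_db - ACOUSTIC_MIN_DB) / (ACOUSTIC_MAX_DB - ACOUSTIC_MIN_DB)
    frac = np.clip(frac, 0.0, 1.0)             # saturate outside the input range
    return T_LEVEL + frac * (C_LEVEL - T_LEVEL)

# A 60-dB swing in acoustic level becomes a modest swing in stimulation level.
print(compress_to_electrical(np.array([30.0, 60.0, 90.0])))  # -> [100. 150. 200.]
```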
Today all of the modern strategies use some variation of pulsatile (on-off), interleaved stimulation of the multiple electrodes within the array in an effort to achieve specific stimulation while avoiding channel interaction, thereby improving frequency selectivity. That is, spatially separate electrodes are activated at different times to account for neural refractory times, current spread, and electrical field interaction. These strategies are commercially known as continuous interleaved sampling (CIS), advanced combination encoder (ACE), Spectral Peak (SPeak), and HiResolution (HiRes) and are individually employed by the three cochlear implant manufacturers in their products. The primary differences among them are the rate of stimulation, the number of channels, and the relation between the number of filters and the number of electrodes activated, as well as the details of how each channel's envelope is extracted and how much of the acoustic signal's temporal information is preserved. SPeak and ACE attempt to emphasize spectral peaks in the signal by selecting a subset of filter outputs with higher energy levels. HiRes augments the envelope with temporal information by allowing higher-frequency components through the envelope detector. Although some patients can clearly perform better with one strategy than another, average speech scores across populations of patients do not clearly demonstrate superiority of a particular strategy (43).
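A minimal CIS-style sketch follows: the audio is split into bandpass channels, each channel's envelope is extracted and compressed, and the channels are sampled in staggered (interleaved) order so that no two electrodes are pulsed simultaneously. The channel count, filter edges, and pulse rate are illustrative assumptions, not any manufacturer's parameters:

```python
# Minimal CIS-style sketch: bandpass filter bank -> envelope extraction ->
# compression -> interleaved (non-simultaneous) pulse amplitudes.
# Channel count, filter edges, and pulse rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16_000            # audio sample rate (Hz)
N_CHANNELS = 8         # number of electrodes/analysis bands (assumed)
PULSES_PER_SEC = 900   # per-channel stimulation rate (assumed)

# Logarithmically spaced band edges from 200 Hz to 7 kHz (assumed).
edges = np.geomspace(200, 7_000, N_CHANNELS + 1)

def cis_frame_amplitudes(audio: np.ndarray) -> np.ndarray:
    """Return a (frames, channels) array of compressed envelope samples,
    one per interleaved pulse slot."""
    hop = FS // PULSES_PER_SEC          # samples between pulses on one channel
    frames = (len(audio) - hop) // hop
    out = np.zeros((frames, N_CHANNELS))
    for ch in range(N_CHANNELS):
        sos = butter(4, [edges[ch], edges[ch + 1]], btype="bandpass",
                     fs=FS, output="sos")
        band = sosfilt(sos, audio)
        env = np.abs(hilbert(band))                   # envelope extraction
        env = np.log1p(30.0 * env) / np.log1p(30.0)   # compressive map to [0, ~1]
        # Stagger each channel's sampling instant: electrodes fire in
        # sequence within each frame, never simultaneously (interleaving).
        offset = (ch * hop) // N_CHANNELS
        out[:, ch] = env[offset + np.arange(frames) * hop]
    return out

# Example: analyze 100 ms of a synthetic two-tone input.
t = np.arange(int(0.1 * FS)) / FS
audio = 0.6 * np.sin(2 * np.pi * 500 * t) + 0.4 * np.sin(2 * np.pi * 3_000 * t)
amps = cis_frame_amplitudes(audio)   # shape: (frames, 8)
```

Staggering each channel's sampling instant is what keeps pulses non-simultaneous; in an actual implant, these amplitudes would then be mapped onto each electrode's measured threshold and comfort levels.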
Patient Selection
Absolute contraindications to cochlear implantation include the absence of a cochlea (Michel aplasia) or of a cochlear nerve. Relative contraindications include active middle ear disease, severe anesthetic risk, substantial residual hearing, or an unwillingness to accept the surgical risks.
There are clearly patients who may require adjustment of expectations through more detailed counseling prior to considering surgery, as their prognosis for attaining high-level open-set speech perception might be more limited. Patients at risk for lower-level performance include those with either anatomic disorders that can adversely affect the electrode-neural interface or central nervous system problems that result in impaired auditory processing or cognitive impairment. Examples of the former category include (a) cochlear disorders such as extensive cochlear obstruction from previous meningitis, otosclerosis, or severe inner ear malformation and (b) cochlear nerve disorders such as cochlear nerve deficiency (CND) or tumor of the 8th cranial nerve (i.e., vestibular schwannoma). Central nervous system disorders that might adversely affect normal brain function, and thus performance with the implant, include previous stroke, degenerative diseases such as multiple sclerosis, dementia, tumors, or infections. With these caveats in mind, it is critical to recognize that these risk factors for lower levels of performance are not contraindications to surgery. Rather, restoration of audition through cochlear implantation can result in dramatic improvements in quality of life and daily function for these individuals but should be undertaken following appropriate counseling about expectations. In certain instances, assessment of psychological factors can be useful to better understand a patient's condition prior to considering implantation.
In the US, at least three different bodies have separate guidelines for establishing candidacy criteria for cochlear implantation in adults and children: the US FDA, which recognizes the results of appropriate safety and efficacy clinical trials carried out by the device manufacturers to achieve specific labeling for their product(s); insurance companies; and the Centers for Medicare and Medicaid Services. In general, adults (≥18 years) are required to have a moderate-to-profound hearing loss without medical contraindications and the desire to be a part of the hearing world. The required results for aided speech perception testing vary by manufacturer and payer and are listed in Table 163.2. Prelingually deafened children can be implanted as young as 12 months of age if they gain limited benefit from amplification while enrolled in an early intervention program. Older children with some degree of speech perception should also have specific speech perception testing results that are obtained while wearing appropriate amplification (Table 163.3). It is clear that candidacy criteria continue to evolve as technology improves and new medical discoveries uncover broadened indications. The reader should always seek up-to-date, detailed information on a case-by-case basis prior to considering candidacy.
For young children, it remains critical to recognize the role of early intervention, in the form of appropriately fit amplification and/or cochlear implantation, in the development of speech perception, speech production, and spoken language (5). While these studies clearly document that earlier is better, this must be balanced against the reality that cochlear implants, in their current form, usually result in a total loss of, or substantial decrement in, native acoustic hearing in the operated ear. While electrophysiologic methods are sufficient for estimating the degree of residual hearing for the purposes of fitting amplification, there remain some patients with no responses on auditory brainstem response (ABR) and auditory steady-state response testing who can nonetheless gain enough benefit from amplification for the purposes of speech and language development (56,57). With this in mind, it remains important to defer cochlear implantation until the age at which developmentally appropriate behavioral audiometric results are valid (usually 7 to 9 months of age for visual reinforcement audiometry). One clear indication for very early implantation is a history of meningitis with ongoing ossification. Irrespective of the type of intervention, early diagnostic and therapeutic auditory-based speech therapy is critical in assessing progress in spoken language development. This single factor remains of paramount importance in deciding whether to proceed with implantation in the very young, hearing-impaired child.
Temporal Bone Imaging in Cochlear Implantation
Diagnostic imaging of the temporal bone and brain is critical in patients considering cochlear implantation to (a) identify the etiology of hearing loss, (b) define surgical anatomy and the potential for complications or sequelae from surgery, and (c) identify factors that negatively impact upon prognosis for performance using the device. Even in the setting of a normal history and physical examination, it is not unusual for routine preoperative temporal bone imaging to identify conditions such as developmental labyrinthine anomalies, CND, otosclerosis, and inflammation, fibrosis, and ossification of the inner ear.
Imaging protocols for both high-resolution magnetic resonance imaging (MRI) and computed tomography (CT) of the temporal bones and brain have been described previously (58,59,60,61,62). From an imaging perspective, developmental labyrinthine anomalies are well defined using either MRI or CT. However, MRI is superior to CT for identifying CND in the setting of a normal-dimension internal auditory canal, inner ear luminal obstruction when ossification is lacking (i.e., fibrosis), and central nervous system disorders such as vestibular schwannoma, demyelinating disease, and stroke. Conversely, CT is better for determining the degree of labyrinthine obstruction that is due to ossification (i.e., calcification), cochlear nerve aperture patency in the setting of a small internal auditory canal, and facial nerve location within the temporal bone (58,59,61,62,63). Thus, we prefer MRI to CT for patients with an uncomplicated history, as this modality can identify a patent cochlea and a normal cochlear nerve while avoiding radiation exposure. CT is used selectively when cochlear obstruction is present, when the internal auditory canal is small, when the semicircular canals are absent (raising the likelihood of a facial nerve anomaly), and when other temporal bone pathology is present. MRI is avoided in patients with pacemakers or severe claustrophobia.
Labyrinthine anomalies have been described in detail previously (64,65). In these cases, cochlear structure can be normal, absent (Michel malformation), cystic, incompletely partitioned (IP), or hypoplastic. Semicircular canals can be either aplastic or dysplastic. The vestibular aqueduct or endolymphatic duct and sac can be enlarged (EVA) in isolation or in association with cochlear IP; this group is referred to as the IP-EVA spectrum. If all cochlear partitioning is absent but there remains differentiation into cochlear and vestibular labyrinthine compartments, this is termed cystic cochleovestibular malformation (CCVA). Common cavity (CC) is defined as no internal differentiation of the labyrinth into either a vestibular or cochlear compartment. Examples are shown in Figure 163.1.
CND refers to a small or absent auditory nerve on high-resolution imaging and has been identified among patients with normal inner ear morphology as well as those with inner ear malformations, narrow internal auditory canals, and/or electrophysiologic characteristics of auditory neuropathy spectrum disorder (ANSD) (60,66,67,68). Figure 163.2 shows examples of CND in the setting of a normal and a small internal auditory canal.
Cochlear obstruction can occur following previous cochlear inflammation in the setting of meningitis and immune-mediated inner ear diseases such as Cogan syndrome, or following severe cases of otitis media. Secondary ossification within the fibrotic inner ear lumen can subsequently result in solid obstruction. In such cases, neural viability can be suspect. Examples of cochlear inflammation, obstruction, and ossification are seen on both MRI and CT (Fig. 163.3).