Cochlear Implants: Introduction
An absence or disturbance of cochlear hair cells causes most cases of deafness. This defect in normal cochlear function, specifically in the transduction of a mechanical acoustic signal into auditory nerve synaptic activity, represents a broken link in the delicate chain that constitutes the human sense of hearing. Cochlear implants afford an artificial means to bypass this disrupted link via direct electric stimulation of auditory nerve fibers.
Current technological and scientific limits preclude the artificial transduction of sound using the exact native cochlear patterns of synaptic activity at the level of each individual residual auditory nerve fiber. Nevertheless, knowledge of these native patterns has aided the development of cochlear implants by allowing speech to be processed into novel synthetic electronic codes that contain the key features of spoken sound. By using these codes to systematically regulate the stimulation delivered by intracochlear electrodes, it is possible to convey the timing, frequency, and intensity of sound. Cochlear implants have progressively evolved, with increasing complexity and elegance, from an experimental concept to a proven tool in the management of patients with sensorineural hearing loss (SNHL). Worldwide, the number of implants is rapidly increasing. As with many other technology-driven medical treatment modalities, recent innovations in microcircuitry and computer science continue to drive the performance profiles of cochlear implants to new heights.
Cochlear Implant Systems Hardware
Currently, three separate corporations manufacture multichannel implant systems that are commercially available and approved by the FDA for use in both adults and children. Although cochlear implantation is expensive, multiple studies have demonstrated that its cost-utility is excellent and compares well with that of other common medical interventions.
All modern implant systems rely on the same basic components: a microphone, a speech processor, and an implanted receiver–stimulator (Figure 68–1).
Figure 68–1.
Schematic depiction of how cochlear implant systems operate. 1. Sound is detected by an external microphone. 2. This signal is directed to an external sound processor. 3. Once processed, a digital electronic code is sent by a transmitting coil situated over the receiver–stimulator via radiofrequency through the skin. 4. The receiver–stimulator delivers electronic impulses to electrodes on a coil located within the cochlea according to whichever strategy is being used by the processor. 5. Electrodes electrically stimulate spiral ganglion cells and auditory nerve axons.
Sound is first detected by a microphone (usually worn on the ear) and converted into an analog electrical signal. This signal is then sent to an external processor where, according to one of a number of different processing strategies, it is transformed into an electronic code. This code, a digital signal at this point, is transmitted via radiofrequency through the skin by a transmitting coil that is held externally over the receiver–stimulator by a magnet. Ultimately, this code is translated by the receiver–stimulator into rapid electrical impulses distributed to electrodes on an array implanted within the cochlea (Figures 68–2, 68–3, 68–4, and 68–5).
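The signal chain described above can be sketched as a simple pipeline. All function names and numeric choices below are hypothetical illustrations of the stages, not any manufacturer's actual firmware:

```python
def microphone(sound):
    """Stage 1: sound detected and converted to an analog electrical signal
    (identity transform in this sketch)."""
    return list(sound)

def speech_processor(analog):
    """Stages 2-3: transform the analog signal into a digital electronic code
    (toy 8-bit quantization of samples in the range [-1, 1])."""
    return [max(0, min(255, int((s + 1.0) * 127.5))) for s in analog]

def receiver_stimulator(code, n_electrodes=4):
    """Stages 4-5: translate the code into per-electrode impulse amplitudes
    (toy round-robin distribution across the electrode array)."""
    impulses = [[] for _ in range(n_electrodes)]
    for i, value in enumerate(code):
        impulses[i % n_electrodes].append(value)
    return impulses

sound = [0.0, 0.5, -0.5, 1.0, -1.0, 0.25]
impulses = receiver_stimulator(speech_processor(microphone(sound)))
```

In a real system the processor's output would follow one of the coding strategies discussed below rather than a simple round-robin.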
Current generations of speech processors are smaller and are continuously being redesigned to improve functionality, comfort, and cosmesis. Most adults and older children wear ear-level (behind-the-ear) processors. Processors worn on the belt, clipped to clothing, or incorporated into small packs (body-worn processors) are still preferred for very young children as well as some adults (Figures 68–6, 68–7, and 68–8). Entirely implantable devices are under development.
Speech Processing
The literature uses the term speech processing, but this component may be more aptly termed sound processing, because the manipulations are not limited to speech. In fact, greater focus is now placed on enhancing the quality of all sound, including a specific effort to improve music appreciation. No matter what processing strategy is used, part of this process must include both amplification (ie, gain control) and compression. Because the deaf ear responds to electrical stimulation over a dynamic range of only 10–25 dB, processing must compress the signal to fit within this narrow range. How best to convert sound into an electrical signal is being actively investigated.
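The compression step can be illustrated with a toy mapping from a wide acoustic input range onto the narrow electric range between a patient's threshold (T) and comfort (C) stimulation levels. All parameter values here are arbitrary illustrations, not clinical defaults, and real processors use steeper logarithmic or power-law compression maps:

```python
def compress_to_electric_range(level_db, acoustic_min=30.0, acoustic_max=90.0,
                               t_level=100.0, c_level=200.0):
    """Map an acoustic input level (dB SPL) onto the electric dynamic range
    between a threshold (T) and comfort (C) stimulation level.

    Toy linear mapping with clamping; T/C values are in arbitrary
    stimulation units chosen purely for illustration.
    """
    x = (level_db - acoustic_min) / (acoustic_max - acoustic_min)
    x = max(0.0, min(1.0, x))  # clamp inputs outside the acoustic window
    return t_level + x * (c_level - t_level)
```

With these illustrative values, a 60 dB input lands halfway between T and C, while any input below 30 dB is held at the threshold level.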
Some of the earliest multichannel strategies, known as analog filter bank strategies (continuous analog), channel speech through multiple frequency-dependent filters and deliver distinct outputs of sinusoidal analog signals directly to separate electrodes. Other so-called feature extraction strategies (F0, F1 and F0, F1, F2) work by rapidly drawing out frequency-based details that are considered to be the most essential in speech recognition; these include both fundamental frequency and vowel formants. Delivery of this key information is accomplished through pulsatile signals whose rates are synchronous with the fundamental frequency and whose tonotopic order is derived from the formants.
Modern adaptations of direct analog strategies have sought to overcome the problem of channel interaction or “spillover” that readily occurs when adjacent electrodes are simultaneously stimulated with continuous analog signals. The result of such efforts has led to the development of a strategy, continuous interleaved sampling, that delivers very rapid noncontinuous pulsatile stimulation over multiple filtered channels.
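The interleaving idea behind continuous interleaved sampling can be shown with a minimal scheduler that grants each channel its own time slot, so that no two electrodes are ever stimulated simultaneously. The timing granularity here is purely illustrative:

```python
def interleaved_schedule(envelopes, cycles=2):
    """Sketch of continuous interleaved sampling (CIS): stimulate channels
    one at a time in rapid succession, avoiding simultaneous stimulation
    of adjacent electrodes and hence channel interaction ("spillover").

    Returns a list of (time_slot, channel, amplitude) pulses; slot widths
    and cycle counts are arbitrary illustrations.
    """
    pulses = []
    slot = 0
    for _ in range(cycles):
        for channel, amplitude in enumerate(envelopes):
            pulses.append((slot, channel, amplitude))
            slot += 1  # each channel occupies its own time slot
    return pulses
```

Every pulse in the resulting schedule occupies a distinct time slot, which is the defining property of the interleaved approach.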
In addition, newer high-rate spectral analysis strategies, such as SPEAK (spectral peak) and ACE (advanced combination encoders), determine 6–10 spectral maxima for each input signal. Other newer approaches, such as “n-of-m” strategies (in which only the n channels with the largest outputs of m available filters are stimulated in each cycle), are constantly undergoing innovation and refinement with the goal of combining the theoretical advantages of each type of system while incorporating ever-progressing new technologies.
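The maxima-selection step common to SPEAK, ACE, and n-of-m strategies can be sketched as picking the n highest-energy channels out of m filter outputs for each analysis frame. This is a generic illustration, not a specific manufacturer's algorithm:

```python
def select_spectral_maxima(channel_energies, n=6):
    """n-of-m selection sketch: from m filter channels, keep only the n with
    the largest energy in this analysis frame; only those electrodes are
    stimulated. Returns the selected channel indices in tonotopic order."""
    ranked = sorted(range(len(channel_energies)),
                    key=lambda ch: channel_energies[ch],
                    reverse=True)
    return sorted(ranked[:n])
```

For an 8-channel frame with n = 3, only the three peak channels are passed on for stimulation; the rest are silent for that frame.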
In speech processing, consideration is given to the incoming acoustic signal; in addition, actual neural responses (neural response telemetry, neural response imaging, or auditory nerve response telemetry) to stimulation may be measured and accounted for in the formulation of a neural stimulation scheme. By measuring evoked action potentials from specific electrodes, it is possible to predict the needed amplitudes for each channel of the speech processor. Some audiologists find this information particularly helpful when programming very young children.
Selection & Evaluation of Patients
Candidacy for cochlear implantation relies heavily on the audiological evaluation. Although the audiometric criteria continue to change, the goal remains the same—identify those patients in whom the implant is likely to provide better hearing. Because of device improvements, the ability to hear with an implant has dramatically improved over time. Therefore, the accepted audiometric criteria for implantation have expanded to include patients with more residual hearing. Some of these patients, having quite useful low-tone hearing in the setting of middle- and high-frequency deficits, may now be classified as having “partial deafness.” Hybrid or short-electrode devices have been developed to allow preservation of native low-frequency hearing, so that the patient can combine electric (cochlear implant) and acoustic (hearing aid) hearing in the same ear.
For adults in the United States, candidacy is based on sentence recognition test scores (eg, Hearing-in-Noise Test or Arizona Biomedical Sentences) with properly fitted hearing aids. Scores of 60% or less are generally needed to establish candidacy.
In children undergoing an evaluation for cochlear implantation, it is first necessary to establish a hearing threshold. This may include otoacoustic emissions, auditory brainstem response testing, auditory steady-state responses, and behavioral testing. A hearing aid trial can then be initiated and speech and language development assessed. Input is elicited from audiologists, parents, teachers, and speech and language pathologists. The cochlear implant team then assimilates the information and a determination is made regarding the child’s progress with amplification and suitability for implantation.
A thorough otological history and physical examination are obtained as part of the preimplantation assessment and include an investigation into the etiology of the hearing loss. (For a thorough discussion of the evaluation of SNHL in adults and children, see Chapter 52, Sensorineural Hearing Loss.)
In pediatric patients, it is important to ascertain whether there is a history of recurrent ear infections, pressure equalization (PE) tube placement, or other otological surgeries. Patients with acute otitis media should be both treated with appropriate conventional antibiotics and demonstrated to be clear of infection before proceeding with surgery. For patients with a chronic middle ear effusion or recurrent acute otitis media, myringotomy with PE tube placement may be considered. Cochlear implants can safely coexist with PE tubes, although, ideally, patients would have an intact tympanic membrane at the time of surgery.