Fig. 14.1
Tonotopic organization in the cochlea and the cochlear nucleus. Black, dark gray, and light gray lines represent high, middle, and low frequencies, respectively. In the cochlea, the tonotopic map follows a simple linear organization, whereas tonotopy in the cochlear nucleus is organized three-dimensionally, with the characteristic frequency also changing along the axis perpendicular to the surface of the brainstem
14.3 History of Cochlear Implants
According to written reports, attempts to treat deafness with electrical stimulation began in the eighteenth century. Benjamin Wilson and Alessandro Volta both used extra-auricular electrical stimulation to produce auditory sensation (in 1748 and 1800, respectively). These primitive forms of electrical stimulation elicited disagreeable shocks in the head together with some “auditory” sensation [5]. From the 1940s to the 1950s, several clinical trials demonstrated that electrical stimulation at the promontory in the middle ear and direct stimulation of the auditory nerve provided some auditory sensation [5, 6]. André Djourno and Charles Eyriès, pioneers in this field, implanted an electronic neuronal stimulator in a deaf patient in 1957. Their device stimulated the residual stump of cranial nerve VIII, which remained after temporal bone resection for large bilateral cholesteatomas [7]. Using the implanted device, the patient could discriminate between low and high frequencies and differentiate the intensity of the stimulation [1]. The patient could not understand speech, but the results of this study inspired the clinicians and researchers who followed, and Djourno and Eyriès’ work is usually referred to as the first cochlear implantation [1]. In 1961, William House and John Doyle performed single-channel cochlear implantation in two deaf patients by inserting a gold wire electrode into the scala tympani; electrical stimulation through this single-channel electrode provided some auditory sensation [1]. Thereafter, single-channel CIs were implanted in deaf patients, mainly in the United States, and data supporting their safety and effectiveness accumulated, even though speech discrimination without lip reading remained difficult for users of the device. Clark developed the popular multichannel implant with bipolar stimulation in 1984, and Food and Drug Administration (FDA) approval for CIs was obtained in 1985. The first pediatric cochlear implantation was performed by House in 1987 [1].
14.4 Historical Changes in Candidacy for Cochlear Implantation
During the early years of cochlear implantation, postlingually deaf adults with normal cochlear anatomy received CIs, but the frequency of cochlear implantation in congenitally deaf children, children with inner ear and/or internal auditory canal (IAC) malformations, and children with multiple disabilities gradually increased as clinical reports demonstrating the safety and efficacy of CIs accumulated. In this section, we focus on historical changes and recent trends in candidacy for cochlear implantation, as well as CI outcomes in challenging candidates, as we consider the effectiveness and limitations of CIs.
14.4.1 Age at Implantation
As mentioned above, the first pediatric CI surgery was performed in 1987, roughly a quarter-century after the first adult CI surgery in 1961. The results of pediatric cochlear implantation demonstrated that CI-mediated auditory stimuli promoted speech and language development, even in prelingually deaf children who, without a CI, would have been obliged to use visual languages such as sign language. Several studies revealed that a younger age at implantation resulted in better language development [8]. Functional brain imaging studies demonstrated that visual stimuli (lip reading) increased regional blood flow and glucose metabolism in the auditory association area in deaf patients, an effect that was not observed in subjects with normal hearing [9, 10]. Interestingly, congenitally deaf children who had used their CIs long-term with appropriate auditory-verbal rehabilitation showed cortical activity similar to that of control subjects [9, 10]. These data suggest that CI-mediated auditory input prevents abnormal cross-modal reorganization of the temporal lobe in deaf children. Sharma et al. demonstrated that P1 latency, an indicator of maturation of the auditory system, was significantly shorter in children who underwent cochlear implantation at an early age than in children who underwent implantation after age 3.5 years [11]. Previous research had identified critical (or sensitive) periods in other primary sensory cortices, such as the visual and somatosensory systems [12], and Sharma’s study, using an electrophysiological approach in deaf children with CIs, clearly demonstrated that a critical period also exists in the auditory system. Based on these results, the age at cochlear implantation has become progressively lower in the congenitally deaf population. In 1990, the FDA lowered the approved age for implantation to 2 years, then to 18 months in 1998, and, finally, to 12 months in 2000. From the viewpoint of mimicking the normal auditory experience of infancy, earlier implantation might better support auditory neural development. However, general anesthesia and surgery in children younger than 6 months require special precautions [13], and precise evaluation of hearing level is usually difficult in infants. Therefore, cochlear implantation between 6 and 12 months of age might be the most practical approach.
14.4.2 Bilateral Cochlear Implantation
Binaural hearing provides head shadow, squelch, and summation effects that improve speech discrimination in noise and is essential for sound localization, which relies on detecting interaural time differences (ITDs) and interaural level differences (ILDs) [14]. Early in the history of cochlear implantation, improved speech discrimination in quiet was the main goal for implanted patients, but in recent years the situation has changed dramatically. Significantly more congenitally deaf children with a CI now attend mainstream kindergartens and schools, where they must listen to and understand speech in noise. A unilateral CI user can exploit the head shadow effect, but this is usually not sufficient to support learning at the same rate as normal-hearing students in noisy classroom conditions. Many studies have demonstrated that patients with bilateral CIs have better speech discrimination scores and sound localization ability in noise than patients with unilateral CIs, although acquisition of these abilities appears to depend on the age at second implantation and the interval between the first and second operations [14]. The European Bilateral Pediatric Cochlear Implant Forum Consensus Statement, published in 2012, recommended that a deaf infant or child receive bilateral CIs simultaneously, as soon as possible after a definitive diagnosis of deafness, to permit optimal auditory development [15]. Even though some countries have not yet established an environment supporting bilateral cochlear implantation, owing to a lack of financial support and health insurance coverage, simultaneous bilateral cochlear implantation before 1 year of age is the global trend in the treatment of congenital deafness.
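To make the binaural localization cues concrete, the short Python sketch below estimates the ITD for a sound source at a given azimuth using the classic Woodworth spherical-head approximation. The model, the head radius, and the speed of sound are illustrative assumptions introduced for this sketch and are not taken from the studies cited above; ILDs, which arise from frequency-dependent head shadowing, are not modeled here.

import math

def woodworth_itd(azimuth_deg: float,
                  head_radius_m: float = 0.0875,
                  speed_of_sound_m_s: float = 343.0) -> float:
    """Approximate the interaural time difference (ITD), in seconds, for a
    sound source at the given azimuth (0 deg = straight ahead, 90 deg =
    directly to one side), using the Woodworth rigid-sphere head model.
    The default head radius and speed of sound are assumed values."""
    theta = math.radians(azimuth_deg)
    # Extra path length to the far ear around a rigid sphere: a * (theta + sin(theta))
    return head_radius_m * (theta + math.sin(theta)) / speed_of_sound_m_s

# A source at 90 degrees azimuth yields an ITD of roughly 0.65 ms -- a cue
# that bilateral, but not unilateral, CI users can in principle exploit.
print(f"ITD at 90 deg: {woodworth_itd(90.0) * 1e3:.2f} ms")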
14.4.3 Inner Ear and Internal Auditory Canal Malformations
Inner ear malformations account for about 20–30 % of cases of congenital severe-to-profound hearing loss. Although identification of a cochlear malformation was once considered a contraindication for cochlear implantation [2] because of the high incidence of cerebrospinal fluid (CSF) gushers, facial nerve abnormalities, and poor CI outcomes, many children with an inner ear malformation now undergo cochlear implantation [16, 17]. In 1987, Jackler et al. were the first to propose a classification system for inner ear malformations, based on the hypothesis that these malformations result from arrest of normal inner ear development at different stages. According to Jackler’s classification, inner ear malformations are categorized as Michel deformity (labyrinth aplasia), cochlear aplasia, common cavity deformity (CC), cochlear hypoplasia, and incomplete partition, each corresponding to a stage of inner ear development [18]. Later, Sennaroglu refined Jackler’s classification, further dividing cochlear hypoplasia and incomplete partition into cochlear hypoplasia (CH) types I–III and incomplete partition (IP) types I–III, respectively [17, 19]. Sennaroglu’s classification is widely used because it effectively predicts surgical problems during implantation and postoperative CI outcomes [17, 20]. For example, patients with Michel deformity are not candidates for cochlear implantation because there is no space for electrode insertion. IP-I, IP-III, CH-II, and some CC malformations are susceptible to CSF gusher because of a communication between the IAC and the malformed inner ear, which is strongly associated with absence of the modiolus. On the other hand, cochlear implantation in patients with IP-II and large vestibular aqueduct syndrome is usually associated with good hearing outcomes. In addition to inner ear malformations, IAC malformations are also important, because a small diameter or stenosis of the IAC or bony cochlear nerve canal (BCNC) is highly associated with aplasia or hypoplasia of the cochlear branch of the vestibulocochlear nerve, also known as cochlear nerve deficiency (CND), which has a negative impact on CI outcomes [21]. When the diameter of the IAC or BCNC is smaller than 2 mm or 1.5 mm, respectively, the condition is diagnosed as narrow IAC (NIAC) [17] or hypoplasia of the BCNC (HBCNC) [22]. In these groups, the CI-mediated auditory response is usually poor because CND is associated with an insufficient number of SGNs, the target neurons for CI-mediated electrical stimulation. Therefore, patients with NIAC or HBCNC, in addition to those with Michel deformity and cochlear aplasia, may be candidates for an auditory brainstem implant (ABI), which is discussed further in Chap. 19.
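As a purely illustrative sketch of the measurement criteria described above (not a clinical decision tool), the following Python fragment flags a narrow IAC or a hypoplastic BCNC from CT-measured diameters. The 2-mm and 1.5-mm cutoffs are those quoted in the text; the function name, parameter names, and imaging workflow are hypothetical.

def flag_canal_anomalies(iac_diameter_mm: float, bcnc_diameter_mm: float) -> list:
    """Return radiologic flags associated with cochlear nerve deficiency,
    using the thresholds quoted in the text: IAC < 2 mm -> narrow IAC (NIAC),
    BCNC < 1.5 mm -> hypoplasia of the BCNC (HBCNC).
    Names and workflow are hypothetical; only the cutoffs come from the text."""
    findings = []
    if iac_diameter_mm < 2.0:
        findings.append("NIAC (narrow internal auditory canal)")
    if bcnc_diameter_mm < 1.5:
        findings.append("HBCNC (hypoplastic bony cochlear nerve canal)")
    return findings

# Example: an IAC of 1.6 mm and a BCNC of 1.2 mm trigger both flags, prompting
# assessment of cochlear nerve status and, potentially, ABI candidacy.
print(flag_canal_anomalies(1.6, 1.2))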