Device Programming



William H. Shapiro

The ultimate goal of device programming is to adjust a device so that it can effectively convert acoustic input into a usable electrical signal for each electrode stimulated. The more accurate this conversion process, the greater the potential for the patient to achieve the ultimate goal of open-set speech perception. There are basic psychophysical measures that the programming audiologist must obtain to achieve this goal, independent of the device used. The method used and the degree of difficulty in obtaining these psychophysical measures can vary considerably depending on several factors (e.g., patient chronological age, mental status, length of deafness, other handicapping conditions). This chapter focuses on device programming in general; programming techniques specific to children; a brief overview of the programming parameters of the three most widely used, commercially available multichannel cochlear implant systems—the Clarion device (Advanced Bionics Corporation, Valencia, CA), the Med-El Combi 40+ (Innsbruck, Austria), and the Nucleus CI24RCS (Cochlear Corporation, Sydney, Australia)—and how these various parameter manipulations can affect perception; and, finally, objective programming techniques.


Device programming typically begins approximately 2 to 4 weeks following cochlear implantation. Prior to the initial stimulation, it can be quite helpful for the audiologist to obtain a copy of the operative or intraoperative monitoring report (discussed later in the chapter). This can provide useful information regarding the number and the integrity of electrodes inserted intracochlearly. Avoidance of the stimulation of extracochlear electrodes during the initial stages of programming can prevent nonauditory side effects, which can delay the smooth transition to the use of the device, especially in children.

Traditionally, regardless of the device, two measures need to be obtained to create a program: electrical thresholds (T levels) and most comfortable levels (C/M levels). More recently, programming for the Advanced Bionics and Med-El devices has not necessitated acquiring T levels, as these levels can be set through automatic software manipulation (discussed later in the chapter). Electrical threshold (minimal stimulation level), although defined differently by different cochlear implant manufacturers, is typically the softest sound that can be reliably identified by the patient 100% of the time. Electrical comfort level (maximum stimulation level) is defined as the loudest sound that can be listened to comfortably for a sustained period of time. These two measures must be obtained in order for a channel to be activated. The ease with which the programming audiologist can obtain these values and the type of conditioned response observed vary with the patient’s age, among other variables. For children, many of the techniques used to obtain these responses are similar to those used by pediatric audiologists.
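For illustration only, the relationship between these two per-channel measures and the resulting electrical dynamic range can be sketched as follows. The units and numeric values here are hypothetical; actual clinical units and ranges differ by manufacturer.

```python
# Illustrative sketch of per-electrode program data (hypothetical units).
from dataclasses import dataclass

@dataclass
class Channel:
    electrode: int
    t_level: float  # electrical threshold: softest stimulus reliably detected
    c_level: float  # comfort level: loudest stimulus comfortable over time

    @property
    def dynamic_range(self) -> float:
        # The usable electrical range for this channel lies between T and C.
        return self.c_level - self.t_level

ch = Channel(electrode=1, t_level=100.0, c_level=180.0)
print(ch.dynamic_range)  # 80.0
```

Both values must be present before a channel can be activated; a program is essentially a list of such channel records combined with a speech-coding strategy.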

Because threshold and comfort levels can be affected by the speech-encoding strategy used, it is important to set the speech encoding strategy prior to collecting T-and C-level data. Encoding strategies can be defined as the method by which a given implant translates the incoming acoustic signal into patterns of electrical pulses, which in turn, stimulate the existing auditory nerve fibers. Strategies, in general, provide the listener with either salient cues regarding spectral or envelope information (SPEAK) or temporal information [continuous interleaved sampling (CIS)/high resolution (HiResolution)]. Even though the number of encoding strategies has increased over the years, as have their methods of implementation, the clinician is still required to obtain the same basic measures (thresholds and comfort levels) on each electrode that will eventually be activated.

Historically, another important parameter that needed to be determined prior to obtaining T and C levels was the stimulation mode. Stimulation mode refers to the electrical current flow, that is, the location of the indifferent (reference) electrode relative to the active (stimulated) electrode. Monopolar stimulation uses a remote ground located outside of the cochlea, whereas in bipolar stimulation all current flow occurs within the cochlea. The Advanced Bionics and Nucleus devices can be programmed in a monopolar or bipolar mode; the Med-El can be programmed in a monopolar mode only. Typically, a wider stimulation mode (monopolar) results in lower threshold values due to the larger physical separation of the active and ground electrodes, which may, in turn, extend battery life for behind-the-ear (BTE) speech processor users. Additionally, the use of monopolar stimulation allows for more consistent threshold values across adjacent electrodes due to the wider spread of current. This consistency throughout the array allows threshold and comfort level values of adjacent electrodes not obtained through actual behavioral testing to be interpolated. This can be especially beneficial where time is critical, as in programming children. Initial concerns that monopolar stimulation would not be place specific proved unfounded: research suggests that patients using monopolar stimulation can pitch rank and perceive a monotonic decrease in pitch as the stimulating electrode is moved from the base to the apex of the cochlea (American Speech-Language-Hearing Association, 2004). For these reasons, over the past few years, monopolar stimulation has been the preferred stimulation mode.
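The interpolation idea described above can be sketched simply: behavioral levels are measured on a subset of electrodes, and linear interpolation fills in the rest. This is an illustrative sketch, not any manufacturer's algorithm; the electrode numbers and level values are hypothetical, and the measured set is assumed to include the first and last electrodes of the array.

```python
# Hedged sketch: linear interpolation of levels for electrodes not tested
# behaviorally (hypothetical units; measured set must span the array ends).
def interpolate_levels(measured: dict[int, float],
                       electrodes: list[int]) -> dict[int, float]:
    """Fill in levels for untested electrodes by linear interpolation
    between the nearest measured neighbors."""
    known = sorted(measured)
    levels = dict(measured)
    for e in electrodes:
        if e in levels:
            continue
        lo = max(k for k in known if k < e)
        hi = min(k for k in known if k > e)
        frac = (e - lo) / (hi - lo)
        levels[e] = measured[lo] + frac * (measured[hi] - measured[lo])
    return levels

# T levels measured on every fourth electrode; the rest are interpolated.
measured = {1: 100.0, 5: 108.0, 9: 116.0}
full = interpolate_levels(measured, list(range(1, 10)))
print(full[3])  # 104.0 (halfway between electrodes 1 and 5)
```

The relative flatness of monopolar levels across adjacent electrodes is what makes this kind of estimate reasonable in practice.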

Once the T and C/M level measures have been obtained, loudness balancing of adjacent electrodes at 100% and 50% of the dynamic range is often performed. Equal loudness across the electrode array has been suggested to be important for optimal speech perception and production. Loudness balancing in an adult or a patient with an auditory memory is not a difficult task; however, obtaining loudness information from a congenitally hearing-impaired 2-year-old can be truly challenging. Studies have demonstrated that despite their inability to loudness balance, young children can achieve open-set discrimination; therefore, loudness balancing, at least in the very young, should not be considered a mandatory step in the programming process.
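The stimulation level corresponding to a given percentage of a channel's dynamic range, as used when balancing adjacent electrodes at 100% and 50%, reduces to simple arithmetic. The values below are hypothetical and for illustration only.

```python
# Sketch (hypothetical units): the stimulation level at a given percentage
# of a channel's electrical dynamic range, measured upward from T level.
def level_at_percent(t_level: float, c_level: float, percent: float) -> float:
    return t_level + (percent / 100.0) * (c_level - t_level)

print(level_at_percent(100.0, 180.0, 100))  # 180.0 (the C/M level itself)
print(level_at_percent(100.0, 180.0, 50))   # 140.0 (mid dynamic range)
```

Balancing at both the top and the middle of the range checks that loudness grows comparably across electrodes, not just that maxima match.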

After the T and C/M levels have been obtained and loudness balancing achieved, a program is created. The program provides patients with their first exposure to live speech stimuli. Based on the individual’s initial reaction to the stimuli, various parameter manipulations (specific device programming procedure section) can be instituted to achieve a comfortable and effective signal.

Device Programming in Children


The goal of preprogramming is to prepare the child for the initial stimulation. This can be accomplished by training auditory concepts in a child who may have had limited exposure to sound. Responsibility for training these concepts usually lies within the domain of the speech therapist working with the child, the audiologist evaluating the child, and, most importantly, the parent. The fitting of an appropriate sensory aid is integral to the preprogramming process. The type of sensory aid can range from a high-powered postauricular aid to a frequency-modulated (FM) system. Many large cochlear implant centers have instituted loaner hearing aid/FM programs both to assist in determining candidacy and to fill the gap between the time of diagnosis and surgery. Cross-modality training, including vibrotactile stimulation, is sometimes used to assist in conditioning a child to respond to auditory stimulation. This method can be helpful when a child has either a total hearing loss or limited exposure to auditory stimuli. The amount of preprogramming varies depending on the auditory needs of the specific child.

Intraoperative Monitoring

Intraoperative monitoring consists of a battery of tests typically performed by an audiologist in the operating room. These tests include, but are not limited to, impedance telemetry, electrical stapedial reflex thresholds, and neural response telemetry/imaging. Although monitoring can be time-consuming and costly, it provides the implant team with important information regarding the implanted device. Specifically, intraoperative monitoring provides information about the electrical output of the device and the patient’s auditory system response to stimulation. Additionally, monitoring can provide the clinician with preliminary psychophysical measures as an adjunct to programming at initial stimulation. Finally, providing family members with information regarding device function immediately postsurgery can be a powerful tool in cultivating the relationship between the professional and the patient.

Through Intranet connections, it is now possible for an audiologist to monitor implant surgery from remote locations. An individual in the operating room, such as a resident, nurse, or fellow, is identified as responsible for setting up the laptop and all connections to the Intranet. When the surgeon is ready for device monitoring, the audiologist is contacted over a “landline,” “takes over” the computer in the operating room via the Intranet, and commences with the monitoring. The use of a speakerphone can facilitate communication between the operating room and the audiologist at the remote location.

Initial Stimulation

Establishing a comfortable programming environment is critical to the success of programming during the initial stages, especially with children (Shapiro and Waltzman, 1998). The room should be equipped with appropriately sized chairs and tables. For children, the presence of two audiologists is optimal: one to condition the child and the other to operate the computer. It is important to have a variety of toys on hand to help condition the children. Toys should be changed often because measurements can become unreliable as a child adapts to a particular toy. Parents should be encouraged to bring the child’s favorite toy to the programming session.

The initial stimulation can be a time of anticipation and apprehension for both the young child and the accompanying parents. It is of utmost importance that the programming audiologist conveys a sense of calm to the parents and the child. Whenever possible, it is recommended that the child separate from the parents during the programming sessions so that a rapport develops between the child and the audiologist.

The videotaping of program sessions can provide additional information regarding the child’s response style, which can then be used to modify the conditioning methods and the responses obtained and to document progress. Basic as well as advanced programming techniques are essential to the implementation of a viable children’s cochlear implant program. Although experience in programming adults should be a prerequisite, there is no substitute for the experience gained by programming large numbers of children.

The initial stimulation is typically scheduled over a 2- to 3-day period, consisting of approximately 2 or 3 hours each day. It is important to note that the length of the initial stimulation phase can vary depending on the child. The time spent during these visits can be divided into two main categories: (1) psychophysics—obtaining T and C levels (i.e., device tuning) with subsequent modifications; and (2) device orientation for child and parents. Historically, with a young child, half the electrode array of the Nucleus 24-channel device was programmed on the first day, with the remaining electrodes programmed on the second day. The third day, if required, could be used for programming modifications (Cochlear Corporation, 1996). The Clarion and Med-El devices, which have fewer electrodes, required slightly less time. With the greater use of objective testing techniques and streamlined fitting procedures (explained later in this chapter), however, device programming time has been reduced significantly for all devices without a compromise in performance. As a result of these procedures, the full array can typically be programmed at the initial visit. Modifications can consist of creating additional programs for the speech processor. These programs, at least during the initial period of stimulation, might be devoted to increases in C levels (i.e., each successive program having increased power). When it is believed that true C levels have been approximated during the initial stimulation phase, other modifications, which are discussed later in this chapter, are attempted. The use of multiple programs can be implemented with the Nucleus 24RCS, Med-El Combi 40+, and Clarion devices.

Obtaining T and C levels on young children can be both challenging and difficult. One cannot assume that the T and C levels obtained during the initial stages of programming are ideal. First, during the first few months, as the child adjusts to this novel acoustic signal, the comfort levels typically increase (Henkin et al, 2003); at the same time, it is important to note that overly ambitious maximum levels may introduce distortion into an otherwise clear signal. Second, physiologic changes occur that are reflected in changes in the T levels during the initial few months of stimulation. Typically, the more accurate the T and C levels, the better the performance with the program.

The initial response of the pediatric patient to electrical stimuli can differ greatly from the response to acoustic stimuli. Reactions can range from no response to crying. Although the older child can be conditioned to raise his or her hand or use play audiometry, conditioned orientation reflex audiometry, behavioral observation audiometry, or visual reinforcement audiometry needs to be used to obtain T levels in the younger child. Obtaining C levels in a young child is more subjective and can require greater clinical acumen on the part of the audiologist. Commonly the child will not respond until the electrical current is too high, at which point he or she will react adversely. During the initial phases of stimulation, setting a conservative dynamic range (i.e., a reduced spread between T and C levels) is the preferred approach so as not to frighten the child with a loud sound.

During the initial stages of programming, it is important that parents understand what should and should not be expected of the child. It is not uncommon for a child who previously had some auditory detection and discrimination to have difficulty maintaining the same level of performance during the early poststimulation phase. Parents need to be prepared for this scenario and to be reassured that over time the child’s performance will improve.

Although the initial pediatric programming techniques vary among audiologists, generally accepted procedures are followed. A gross estimate of the child’s behavioral response is obtained via standardized procedures at an apical electrode, quickly skipping three or four electrodes toward a more basal area electrode, and continuing this process until the whole electrode array has been stimulated. Once the gross estimates of thresholds have been determined, more definitive measures should be obtained. While the young child is engaged in some activity, comfort levels are set, either psychophysically (tones or bursts) or through live voice stimulation for each of the electrodes. During the setting of C levels, the audiologists are responsible for closely monitoring the child for any adverse reactions. Underestimating C levels initially can allow for a smooth transition to the implant by reducing loud auditory sensations.

Daily Care, Maintenance, and Troubleshooting

A portion of time during the initial stimulation is devoted to the daily care and maintenance of the device and understanding how to troubleshoot the external equipment. Parents and patients should be encouraged to ask questions and read the user manual supplied by the manufacturer. Of particular importance during the initial phases of programming is a daily check of the skin under the headset magnet to assess skin integrity. It is common for parents in particular to tighten the magnet sufficiently to ensure that the coil never falls off; in the most extreme circumstances, this can lead to skin necrosis under the magnet. Parents and adult patients should be advised that during the first few weeks it is not uncommon for the headset to fall off. They should be reassured that, over time and during the “bonding” process with the device, the child (or adult) will learn to put the headset back on by him- or herself, and that the constant magnetic attraction will eventually lead to a more secure coupling.

Although the speech processor, headset, and cables are robust, breakdowns can occur. For adult cochlear implant users, having spare parts such as batteries, cables, and possibly a headset should be adequate to solve most hardware problems. Parents should maintain an even greater supply of parts for their child and provide the school with some as well. Numerous counseling sessions regarding device troubleshooting offered by the cochlear implant center, in conjunction with the comprehensive troubleshooting guidelines provided by the manufacturers, are usually sufficient for problem solving; however, if all attempts to resolve the problem fail, the implant center should be contacted immediately. Regularly scheduled in-servicing of school personnel by the child’s cochlear implant center is crucial to maintaining a high functional level. The failure rate of the internal portion of the cochlear implant is low; however, should a failure occur, reimplantation should be performed as quickly as possible.

Electrostatic discharge (ESD) can pose a problem for cochlear implant users. The transfer of static electricity to the implant system can corrupt or eliminate a stored program in the speech processor and, on rare occasions, be responsible for the breakdown of the internal electronics package. Device manufacturers suggest removal of the externally worn parts prior to exposure to high levels of static electricity, such as plastic slides. It should be noted, however, that with the advent of more robust electronics, ESD has become a less significant problem.

Device-Specific Programming

One of the more powerful tools in current devices is the ability to provide the user with multiple-program, multiple-strategy options. Although the multiple-program option should not serve to lengthen the time between programming visits, it can provide a more flexible, efficient transition during the initial stimulation period. More importantly, the multistrategy option allows clinicians and researchers to determine which particular speech-coding strategy may be optimal for an individual patient by allowing the patient access to these different strategies in different programs. Because time is necessary to determine the best strategy for a particular individual, it is advisable to allow a significant and equal amount of time with each strategy.

From a historical perspective, the Clarion multistrategy cochlear implant with the CI ICS and the body-worn processor was able to provide the patient with two speech-coding strategies: continuous interleaved sampling (CIS) or simultaneous analog stimulation. These two strategies represented significantly different approaches to decoding and encoding the incoming signal. This original device (versions 1.0 and 1.2) consisted of 16 intracochlear electrodes arranged in eight closely spaced, independent parallel output circuits, oriented radially. Over time, further changes included the enhanced bipolar stimulation pattern, which stimulated the medial electrode in one pair relative to the lateral electrode in the next most apical electrode pair. The wider electrode spacing resulted in lower T and maximum C levels, with a maximum of seven independent stimulation sites within the cochlea. In an attempt to further reduce current requirements, a Silastic positioner was used to “push” the electrode contacts closer to the modiolus. In October 2002, the Clarion device was voluntarily removed from the market due to concerns regarding an increased rate of bacterial meningitis in this population. The CII ICS was then introduced and referred to as the Bionic Ear. This version had 16 independent output circuits that could stimulate each of the 16 electrode contacts simultaneously, nonsimultaneously, or in various combinations. The newest internal device to be introduced, the 90K, is the first Advanced Bionics device that does not use a ceramic case to house the electronic components; instead, a Silastic-type material is used, which allows for surgical removal of the internal magnet without disturbing the electrode array in the cochlea. This device, as well as the CII, is used in conjunction with the SoundWave software platform (discussed later in the chapter).

The Advanced Bionics device, featuring continuous bidirectional telemetry, allows electrode impedances to be measured during each fitting, thus enabling the clinician to determine the integrity of each electrode. Electrodes with impedances outside of the normal range (5 to 150 kΩ) are automatically disabled at initiation of programming. This continuous two-way telemetry also allows an audible alarm to sound when the headset is not linked to the internal receiver-stimulator. Prior to SoundWave software, T and most comfortable (M) levels were determined channel by channel using pulsatile (CIS) or sinusoidal compressed analog (CA) stimulation. Upper-limit comfort levels (ULCLs) are measured using live speech only. Following the loudness balancing procedure, the clinician adjusts the most comfortable level (M) using live voice in real time. Because M levels are subjective and can vary depending on a variety of factors, they can be adjusted with the volume dial on the speech processor. The volume dial shifts the patient’s dynamic electrical range by increasing or decreasing the M level. The upper limit of comfortable listening is set by the upper volume range, defined as the percentage of the electrical dynamic range above M levels; the lower volume range is defined as the percentage below M levels. The clinician can set both of these ranges with the volume dial during live-voice speech or manually without stimulation (Advanced Bionics Corporation, 1995). This is especially important in programming children because the audiologist can limit the range in which the volume dial can be manipulated. The sensitivity dial interacts with the volume dial by increasing and decreasing the audio gain. Raising the sensitivity dial allows an increased perception of loudness because the incoming signal is mapped higher in the patient’s electrical dynamic range.

Various general programming parameters are available to the audiologist, including a gain adjustment palette and the input dynamic range (IDR). The gain adjustment consists of an emphasis/de-emphasis tool palette, which allows adjustment of the sound level of the individual frequency band associated with each channel. This parameter can be helpful in troubleshooting (e.g., if a patient complains of a boomy quality, the lower frequencies can be attenuated). The IDR specifies the range of input sound levels that is transmitted to the patient, mapped from the M levels down to the T levels. The default IDR is set to capture the full intensity range of speech sounds. Decreasing the IDR excludes low-level sounds and noise, along with the possibility of losing certain speech information (e.g., quiet fricatives); increasing the IDR can increase the perception of noise. The specific parameters available in a CIS encoding strategy are envelope detection and cutoff frequency, repetition rate, pulse width, and firing order. Additional parameter manipulations can be used depending on the specific complaints of the patient.
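The trade-off of narrowing or widening the IDR can be illustrated with a simple linear mapping from input level to the electrical range between T and M. The dB values, the linear shape of the mapping, and the reference maximum input are assumptions for illustration only, not the Advanced Bionics implementation.

```python
# Illustrative sketch: how an input dynamic range (IDR) limits which acoustic
# levels reach the electrical range [T, M]. All values are hypothetical.
def map_input_to_electrical(input_db: float, idr_db: float,
                            max_input_db: float,
                            t_level: float, m_level: float) -> float:
    """Map an input level (dB) into the electrical range [T, M].
    Inputs below (max_input_db - idr_db) stay at T (effectively excluded);
    inputs at max_input_db map to M."""
    floor_db = max_input_db - idr_db
    if input_db <= floor_db:
        return t_level  # quiet sounds excluded by a narrow IDR
    frac = min((input_db - floor_db) / idr_db, 1.0)
    return t_level + frac * (m_level - t_level)

# A soft 35 dB sound with a wide IDR lands within the audible range:
print(map_input_to_electrical(35.0, 60.0, 70.0, 100.0, 200.0))
# With a narrow IDR, the same sound falls below the floor and stays at T:
print(map_input_to_electrical(35.0, 30.0, 70.0, 100.0, 200.0))  # 100.0
```

This captures the text's point: a smaller IDR drops quiet fricatives below audibility, while a larger one admits more low-level noise.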

As speech-coding strategies have evolved from the diffuse broadband signal found in single-channel devices to the transmission of significantly more detailed information found in multichannel devices, device programming time has grown as well. Clinicians are faced with more programming decisions due to the increased number of channels, the use of bilateral implants, and expanded parametric options. The greater amount of time spent on device programming has reduced the time the clinician has to focus on other tasks, such as counseling and rehabilitation (Garber et al, 2002). In 2003, the Food and Drug Administration (FDA) approved a new speech-processing strategy referred to as HiResolution Sound. This strategy, along with a new software platform called SoundWave Professional Suite, introduced a new fitting methodology geared toward reducing fitting time while maintaining optimal patient performance. This fitting strategy can be used with the CII or 90K device. The basis of HiResolution programming is the use of “speech bursts” during psychophysical testing; this feature more accurately represents dynamic real-time input. “Speech bursts” (white noise) are created when complex stimuli are delivered to the processing system and transmitted through the same filtering, amplitude detection, and averaging algorithm utilized for incoming acoustic sound, which allows for more representative real-time stimulation. With speech bursts, three to four channels are programmed simultaneously. Preliminary studies have demonstrated equivalent performance between single-channel programming and HiResolution programming (Advanced Bionics Corporation, 2003). This reduces the number of loudness adjustments during programming sessions, reduces programming time, and may serve to reduce patient fatigue.

The SoundWave platform employs a different approach for setting threshold levels (minimal audible levels), called “Auto T.” Historically, obtaining T levels was not a requirement in all processing strategies, such as the simultaneous analog strategy (SAS) (Kessler, 1999). Testing of HiResolution programs with measured T levels and without them [T = 1 CU (clinical unit) for all channels] showed equivalent performance. SoundWave automatically sets T levels at M/10, based on the average dynamic range (20 dB) of CII Bionic Ear users programmed with HiResolution Sound. This automatic setting can be adjusted as needed, globally or for individual channels, in real-time stimulation. Loizou et al (2000) demonstrated that a clinician could optimize performance by jointly varying pulse rate and pulse width. Automatic pulse width (APW) and rate adjustment is a SoundWave parameter that optimizes each patient program to maintain the narrowest pulse width and fastest stimulation rate based on the patient’s own stimulation levels required to achieve adequate loudness growth. With SoundWave, the APW algorithm is able to monitor voltage compliance to limit distortion. The SoundWave software uses two strategies for delivering HiResolution technology: HiResolution-P (two channels simultaneously paired) and HiResolution-S (channels stimulated sequentially). The APW will continue to maintain the narrowest pulse width and fastest rate possible within each strategy option. The Power Estimator (PoEM) manages the radiofrequency (RF) power, obviating the need for the clinician to optimize RF at the end of the session. This allows a program created on the body-worn Platinum Speech Processor (PSP) to be downloaded directly to the BTE speech processor with automatic RF adjustment.
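The Auto T rule described above is simple arithmetic: each channel's threshold defaults to one tenth of its M level. The sketch below illustrates the idea only; the clinical units and example values are hypothetical.

```python
# Sketch of the "Auto T" default: T level set to M/10 on every channel
# (hypothetical clinical units; real programs allow per-channel overrides).
def auto_t(m_levels: list[float]) -> list[float]:
    return [m / 10.0 for m in m_levels]

m_levels = [200.0, 220.0, 180.0]  # measured M levels for three channels
print(auto_t(m_levels))  # [20.0, 22.0, 18.0]
```

Starting from this default, the clinician can still raise or lower T levels globally or channel by channel during live stimulation.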


The Med-El Combi 40+ hardware system consists of an implanted stimulator driving a 12-electrode-pair intracochlear scala tympani array. It can provide the patient with two separate and distinct speech-coding strategies: CIS and an n-of-m (number of maxima) spectral peak speech extractor. As is the case with the Advanced Bionics device, one can obtain voltage differences between an active or nonactive electrode and the remote ground (electrode impedance) via telemetry modes. The coupling mode of the Med-El implant is monopolar, which avoids the high current densities that would be necessary for fast pulsatile stimulation in bipolar mode. The CIS-PRO processor (body-worn device) features a four-position switch (off and programs 1 through 3), which allows for multiple programs. An LED indicator system (one red, one green) indicates five speech processor conditions, and a volume dial is used for controlling the amplitude of the stimulation signal. An external input connection is used for interfacing approved accessory equipment, such as FM systems. The more widely used Tempo+ BTE employs a three-position program switch along with a three-position volume control, which allows the user as many as nine separate programs. Additionally, a sensitivity control allows for greater flexibility. This device, presently the standard device, offers a wide array of wearing configurations that make it ergonomically appropriate for young children. The speech-processing strategy continues to be either sequential or simultaneous, that is, a new version of the CIS strategy, called CIS+, in addition to n-of-m.

The typical psychophysical measures that need to be obtained are T and M levels, with the M levels loudness balanced. Recently, however, research has focused on the automatic setting of T levels at 10% of the maximum comfort level (MCL) for several reasons, including reduced programming time, reduced audibility of low-level noise, and an enhanced peak-to-valley ratio for formant perception. In 2005, Spahr and Dorman studied 15 subjects implanted with the Combi 40+ to determine whether any difference in performance occurred between assigning a threshold of M/10 and actually setting minimum stimulation levels to behavioral thresholds. The results showed equivalent scores between the two procedures in quiet and in noise at low input levels. Parameters other than T and M levels that can be manipulated to achieve a desired auditory effect include the maplaw, channel order, volume mode, and bandpass filters. The maplaw controls the shape of the amplitude growth function and its compression characteristics and can be changed in real time. The growth function can be logarithmic or linear; the four logarithmic functions provide signal expansion at the lower part of the dynamic range and signal compression at the upper part. The default Combi 40 maplaw coefficient is 500. Channel order specifies the sequence of interleaved stimulation for the CIS strategy. The default for an eight-channel CIS strategy is 1–5–2–6–3–7–4–8, referred to as a staggered sequence. An apex-to-base or base-to-apex sequence is the default for the n-of-m peak-extraction strategy, for which staggering is not preferred because the channel selection changes continuously. The range of the volume control can be adjusted through software manipulation.
In addition, two different volume modes can be selected in software for the manual control knob on the speech processor: the IBK (Innsbruck) mode and the RTI (Research Triangle Institute, Research Triangle Park, NC) mode. The RTI mode allows the patient to control stimulation amplitude at threshold, whereas the IBK mode does not (Med-El Corporation, 1997).
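A logarithmic maplaw of the kind described above can be illustrated with a common textbook compression formula, shown here with the default coefficient of 500 mentioned in the text. The exact formula is an assumption for illustration, not the manufacturer's implementation.

```python
# Illustrative logarithmic maplaw: a normalized input amplitude x in [0, 1]
# is mapped to [0, 1] so that low-level signals are expanded and high-level
# signals compressed. Coefficient c = 500 follows the default cited above;
# the formula itself is a generic compression law, assumed for illustration.
import math

def maplaw(x: float, c: float = 500.0) -> float:
    return math.log(1.0 + c * x) / math.log(1.0 + c)

# A soft input (10% of full scale) already occupies well over half of the
# output range, while full-scale input maps exactly to 1.0:
print(maplaw(0.1))
print(maplaw(1.0))  # 1.0
```

Steeper coefficients expand low-level detail further; a linear growth function would instead map `x` straight through unchanged.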

Med-El’s present version of software, CI Studio, a Windows-based system, can be used to program all previous-generation Med-El devices and differs from the older DOS version in several respects. In terms of dynamic range, the software allows for interpolation between channels, and it features “drag and drop” functions. The pulse duration can vary per channel and can be changed in steps of 1.7 μs. As for the maplaw, an S-slope option can reduce unpleasant background noise, although S-slope maplaws cannot be exported; it is now possible to create nonstandard maplaws with text files. The bandpass filter now has an extended frequency band (200–8500 Hz).

Med-El’s newest device, the Pulsar CI100, contains electronics and an application-specific integrated circuit (ASIC) that provide the user with up to 5 days of battery life, with a flexible platform design for future cochlear implant technology. Currently, the Pulsar CI100 cochlear implant is programmed via CI Studio+ 2.01. It continues to use a CIS speech-coding strategy with Hilbert transformation, and has the potential for simultaneous stimulation to enhance acoustic detail and clarity without the disadvantages of channel interaction, using intelligent parallel stimulation (IPS). It is hoped that parallel stimulation without channel interaction may provide the user with more detailed information and signal replication, leading to improved performance in difficult listening situations. The new software also offers simultaneous fitting (a split screen and dual interfaces for use with bilateral implants); QuickFit programming, an interactive task that can provide a quick and accurate estimation of MCLs based on data derived from electrical stapedial reflex threshold (ESRT) measures; and a test button within the interface configuration function to confirm communication between the programming interface and the computer.


The Nucleus 22-channel cochlear implant, the antecedent to the Nucleus 24, like the two previously described multichannel implants, can be programmed to fit the requirements of the individual implant recipient. Although not currently used in the United States, it merits some discussion because of its similarities to the Nucleus 24 and the large number of recipients still using the device. Dedicated software is used to deliver electrical stimulation and to measure T and C levels for each of the 22 implanted electrodes. These parameters are then combined with a predetermined speech-processing strategy to create a program or “MAP.” T and C level values can be affected by several important parameters, one of which, stimulation mode, determines the current flow within the cochlea. The wider the current spread, the lower the electrical T and C levels and, therefore, the fewer the electrode pairs available for stimulation. It is preferable to provide stimulation at as low an electrical level as possible because elevated stimulation levels reduce the overall efficiency of the system by slowing the stimulation rate and reducing usable auditory information (Cochlear Corporation, 1996). Although monopolar stimulation is not available in the Nucleus 22 system, common ground stimulation is. This mode allows any of the 22 electrodes to be designated as active; all of the other electrodes are then connected to form a single indifferent electrode, resulting in a more diffuse current flow. The common ground mode is useful as a diagnostic mode when checking the integrity of individual electrodes because it allows the clinician to select and stimulate each electrode independently to determine whether it can generate a response and whether that response shows loudness growth.
Common ground is often used during the initial stimulation of a young child with limited exposure to auditory input because it provides a more conservative approach to programming, as all electrodes are linked together in “common ground.” This increases the likelihood of aberrant electrodes being detected, thereby decreasing the possibility of unpleasant sound sensations. Common ground is not recommended for partial insertions because current will flow outside the cochlea and possibly cause nonauditory stimulation.

The frequency-to-electrode allocation parameter allows the audiologist to assign a particular bandwidth to a given electrode. The allocation will depend on the speech-processing strategy used and the number of electrodes available for stimulation. For the spectral peak (SPEAK) strategy, the frequency bands are linearly distributed in the low frequencies to approximately 1850 Hz and logarithmically distributed thereafter. The default frequency allocation table (electrode-to-bandwidth allocation) for 20 active electrodes as determined by the software is table 9. Skinner et al (1994, 1997) suggested that frequency allocation table 7 results in improved phoneme, word, and sentence perception, possibly due to an increased number of electrodes assigned to frequencies below 1000 Hz, which allows for more accurate perception of the change in formant frequencies for diphthongs and the recognition of the phoneme /m/ as different from /l/.

In the SPEAK strategy each of the 20 programmable filters in the speech processor has a default gain of 8. Because the gain is applied to each filter prior to maximum selection, the gain settings affect which maximum is determined. Reducing the gain on a particular filter will de-emphasize that filter output.

The amplitude mapping algorithm is a nonlinear function determined by the interaction of the base level and the Q value. The base level controls the minimum input level that will produce electrical stimulation. The default value of the base level is 4 and is rarely changed. Increasing the base level increases the level of sound required to initiate electrical stimulation. The base level may be changed if a patient complains of unwanted environmental noise or perceives an internal electrical noise generated by the speech processor. In the past few years, improved speech processor design, with its reduced internal noise, has obviated the need to manipulate this parameter. When the function knob on the speech processor is set to the S position, the base level is automatically raised to 10. This serves to reduce the acoustic dynamic range of the processor from 29.5 dB (base level = 4) to 22.7 dB. The patient should therefore be advised to use caution with this setting because speech perception can be adversely affected.

The Q value controls the steepness of the amplitude growth and determines the percentage of the patient’s electrical range devoted to the top 10 dB of the speech processor’s input range. As the clinician reduces the Q value, the amplitude growth function at the lower end of the speech processor’s input range becomes steeper. Reducing the Q value can serve to make soft sounds seem louder, including background noise. Increasing the Q value reduces the background noise but may result in the patient not hearing soft voices or speakers at a distance. Additionally, global modifications in the T or C levels can also result in louder or softer sound sensations. Increasing T levels globally results in louder sound sensations for low-level acoustic signals, and may serve to reduce tinnitus. Conversely, decreasing T levels globally results in softer sound sensations for low-level input. Certainly increasing or decreasing C levels by a fixed percentage of the dynamic range results in louder or softer sound sensations. Recent research has focused on the effect of increased T levels on speech perception. Skinner et al (1998) compared methods for obtaining minimum stimulation levels (T levels) used in programming the Nucleus 22 cochlear implant with subjects using the SPEAK strategy. The study examined differences in speech perception, at different input levels, as a function of the method of setting minimum stimulation levels. Two programs were created: one at raised thresholds (mean = +2.04 dB) and one at threshold (the clinical default) to determine whether raised levels would improve recipients’ understanding of soft speech sounds with the SPEAK speech coding strategy. The results suggested that the use of a raised-level program by Nucleus 22 device users has the potential to improve speech perception at lower intensity levels.
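As a rough illustration of the interaction described above, the sketch below maps an acoustic input (in dB above the base level) to a fraction of the electrical dynamic range, devoting the top 10 dB of the input range to the top Q percent of the electrical range. The piecewise form and the default values are assumptions for illustration, not Cochlear's actual mapping algorithm.

```python
def map_amplitude(input_db, input_range_db=30.0, q=20.0):
    """Map acoustic input (dB above base level) to a fraction (0..1) of the
    electrical dynamic range between T and C. q is the percentage of the
    electrical range devoted to the top 10 dB of the input range."""
    input_db = max(0.0, min(input_db, input_range_db))
    knee = input_range_db - 10.0              # the top 10 dB begins here
    if input_db >= knee:
        # top 10 dB of input -> top q% of the electrical range
        return (1.0 - q / 100.0) + (input_db - knee) / 10.0 * (q / 100.0)
    # remaining input -> lower (100 - q)% of the electrical range
    return input_db / knee * (1.0 - q / 100.0)
```

Lowering `q` steepens the lower segment (soft sounds map higher in the electrical range and seem louder), matching the behavior described in the text.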

The Nucleus CI24M device has 22 active electrodes implanted within the cochlea and two remote grounds: a ball electrode implanted under the temporalis fascia muscle and a plate (ground) near the receiver-stimulator, which permits programming in a common ground, bipolar, or monopolar stimulation mode. Shortly after the Nucleus CI24M was introduced, a modification of this system called the Nucleus 24 Contour (CI24RCS) was introduced that had a precoiled electrode array. An array that “hugs” the modiolus allows for closer proximity to the auditory nerve fibers. The goal was the reduction of current requirements and current spread, while increasing battery life and enhancing place specificity.

The SPRint (body) speech processor (Cochlear Corporation, Englewood, CO) holds four separate and distinct programs, allowing for flexibility in programming, and has a personal or public audible alarm to alert others to device manipulations, which is especially important in the pediatric population because of the tendency for children to play with equipment.

The SPRint supports three different encoding strategies: SPEAK, CIS, and the advanced combination encoder (ACE), a hybrid strategy designed to combine features of SPEAK (large number of stimulation sites, dynamic channel selection, moderate rate, and improved frequency representation) with those of CIS (high stimulation rate, fixed channel selection, and improved resolution) (Cochlear Corporation, 1998a–d). The CI24RCS offers a choice of a 4-, 6-, 8-, or 12-channel CIS coding strategy from the 22 sites of stimulation. The ESPrit (BTE) speech processor, which implements the SPEAK strategy, is capable of storing two separate programs but cannot support the more advanced encoding strategies (CIS and ACE). The newer two-program ESPrit 3G allows for the use of the CIS and ACE strategies, which makes it a more attractive choice for older children and adults than the body-worn device.

Prior to initial stimulation, telemetry can be used to ensure the integrity of the electrodes. The CI24RCS is capable of various types of telemetry to assess the internal functioning of electrodes as well as to measure the response of the whole nerve action potential (neural response telemetry, discussed later in this chapter). Electrode impedance is used to detect shorted and high-impedance electrodes that should not be used in a MAP. If an electrode has high impedance, a voltage compliance problem may occur, which in turn will affect loudness growth. Under these circumstances, reducing T and C levels by increasing the pulse width or using another stimulation mode (monopolar) can resolve the problem. If this fails, the electrode should be deactivated because the MAP will not convey appropriate loudness information.
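The screening logic described above can be sketched minimally as follows; the impedance cutoffs and units here are hypothetical illustrations, not Cochlear's telemetry criteria. Flagged electrodes would be reviewed and, if necessary, excluded from the MAP.

```python
def screen_electrodes(impedances_kohm, low=0.5, high=20.0):
    """Flag electrodes whose impedance (kilohms) suggests a short circuit
    (abnormally low) or an open/high-impedance contact. Returns two lists
    of electrode numbers to review before MAP creation."""
    shorted = [e for e, z in sorted(impedances_kohm.items()) if z < low]
    high_z = [e for e, z in sorted(impedances_kohm.items()) if z > high]
    return shorted, high_z

# Hypothetical telemetry readings for four electrodes:
shorted, high_z = screen_electrodes({1: 5.2, 2: 0.1, 3: 6.0, 4: 35.0})
# shorted -> [2], high_z -> [4]
```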

After assessing electrode integrity, the speech-encoding strategy is chosen and a stimulation rate is selected. Generally, a decrease in T levels will occur with increasing rate. Patients should be encouraged to remain with a specific strategy for several weeks to allow for an adjustment period prior to fine-tuning secondary parameters. These secondary parameters for CIS and ACE include channel/electrode allocation (i.e., electrode set), gain/frequency response shaping, frequency table, order of stimulation, jitter, and Q value. Manipulating the electrode set may provide the most efficient means of effecting changes in sound quality. Complaints of uncomfortably high pitch can be addressed by lowering the stimulation rate, incorporating a more apical electrode in the electrode set, and adjusting the low/high-frequency gain settings (Cochlear Corporation, 1997). Taking advantage of the multiprogramming capability, especially in ACE, can be useful by fitting the patient with multiple programs of varying stimulation rates and maxima.

Managing the deleterious effects of background noise is a dilemma. The ESPrit 3G (BTE) features the “whisper” setting, which is a fast-acting compression circuit that increases the range of input signal intensities coded by the speech processor. Clinical trials have shown a 15% improvement in scores for words presented at a reduced level in quiet, without a decrement in performance in noise. The SPRint (body) processor also employs adaptive dynamic range optimization (ADRO), which is a preprocessing scheme that continuously adjusts the gain in each frequency band to optimize the signal in the output dynamic range. This often results in improved overall sound clarity through better detection of soft speech, while maintaining comfort for loud sounds.

Managing nonauditory side effects such as facial nerve stimulation, pain, and dizziness, irrespective of device, can be a challenging task and is the subject of ongoing research. For the Advanced Bionics device, lowering clipping levels or deactivating the channel can accomplish the desired effects. For the Nucleus device, these side effects can usually be programmed out by reducing current levels on a given electrode (reducing maximum stimulation level), bypassing that electrode, or using a variable-mode programming strategy to widen the stimulation mode. Variable mode allows the clinician to specify both the active and the indifferent electrode (i.e., channel of stimulation). This flexibility of combining different bipolar modes in the same MAP can result in a greater number of active channels than actual electrodes available for stimulation. Pseudo-monopolar mode, a form of variable mode, can be useful in partial insertions because it uses an extracochlear electrode as the indifferent electrode for all other intracochlear electrodes. Unlike typical stimulus flow configuration, this mode couples an indifferent electrode that is located basally, usually outside of the cochlea, to the active electrode.

Double-electrode MAPping also can be clinically useful with patients who have a partial insertion of the electrode array or a limited number of usable channels available for stimulation because it increases the overall frequency range that the speech processor analyzes. For example, if a patient has 12 stimulable channels, the overall bandwidth would be 240 to 4926 Hz using the default frequency allocation. Double-electrode MAPping of the eight most basal channels increases the upper bandwidth limit by more than twofold, which can serve to improve the sound quality of high-frequency phonemes. The clinician, of course, may choose to double-electrode MAP the apical electrodes instead. All programming manipulation, however, should ultimately be based on the patient’s sound quality judgments.

Much has been discussed in the cochlear implant (CI) community about “creative programming,” that is, the ability of clinicians to work “out of the box” in attempting to improve patient outcomes through innovative tweaking. For instance, Skinner et al (1998) demonstrated that using counted Ts during psychophysical testing can improve a patient’s perception of soft speech. The technique of requiring the patient to actually count the number of stimulations will typically raise the threshold level. Furthermore, a recent article by Fourakis et al (2004) showed that manipulation of the frequency allocation table (FAT) by shifting more electrodes toward the lower frequency range could improve vowel perception without compromising consonant recognition in adult Nucleus 24 users. Manipulation of T levels or gains at a formant frequency level has been employed by a handful of CI audiologists to reduce substitution errors. This technique requires that the F0, F1, and F2 of both misinterpreted sounds be identified in order to determine where the frequencies differ. The audiologist would then increase the T levels of the electrodes responsible for these differences in an attempt to make the differences more apparent. This technique, on occasion, may have the desired effect, possibly due to the correction of “T tails” (nonlinear growth of loudness on a particular electrode), a common phenomenon in psychophysical testing. This tweaking of different parameters can occur independently of or in combination with other parametric changes. This author would urge caution with this technique because overmanipulation might increase errors for other phonemes. Furthermore, there have been no publications in the scientific literature that demonstrate efficacy with the use of this procedure.

Because the Nucleus 24 uses monopolar stimulation, there is less variability in T and C levels between adjacent electrodes. Plant and Psarros (2000) investigated the feasibility of measuring every second or every fifth electrode while interpolating the intermediate electrodes. The study was done at three different stimulation rates, and the results were compared with those obtained using a standard behavioral device programming technique. They found no significant T and C level differences for any stimulation rate as compared with behaviorally obtained (T/C) levels at every electrode. Minor adjustments of T and C levels were suggested, however, when increasing the distance between measured channels. Based on this work, a study was designed by Cochlear Americas (Denver, CO) to systematically evaluate some of the newer “streamlined” programming techniques and compare them with the traditional behavioral technique whereby every electrode is measured. The objectives of the study were threefold: to demonstrate equivalent speech perception outcomes between three streamlined programming techniques and traditional behavioral programming, to compare the clinician’s time for the streamlined versus the behavioral techniques, and to provide the clinician with a more standardized programming approach for both adults and children.

Three techniques were used: first fit behavioral, first fit integrated OR, and first fit integrated initial activation (IA). The first technique involved interpolation of five behaviorally measured T levels with C levels set in live voice; the second technique used five NRT values obtained intraoperatively to set an overall profile, with absolute T and C levels determined in live voice mode; and the third technique was identical to the second except that NRT was obtained at initial activation, not intraoperatively. Preliminary data suggest that it takes less time to create a first fit integrated OR or behavioral streamlined MAP than a traditional behavioral programming MAP in adults (Chute and Popp, 2004). For the majority of subjects, however, there were few performance differences between the traditional behavioral and the streamlined programming techniques.
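The interpolation step common to the streamlined techniques above can be sketched as simple linear interpolation between measured electrodes. The electrode numbers and current levels below are hypothetical clinical units, and the function is an illustration of the principle, not the software's actual algorithm.

```python
def interpolate_levels(measured, n_electrodes=22):
    """measured: dict {electrode: level} for a measured subset that must
    include the first and last electrodes. Returns one level per electrode,
    with unmeasured electrodes set by linear interpolation between the
    nearest measured neighbors."""
    anchors = sorted(measured)
    levels = []
    for e in range(1, n_electrodes + 1):
        if e in measured:
            levels.append(float(measured[e]))
            continue
        lo = max(a for a in anchors if a < e)   # nearest measured below
        hi = min(a for a in anchors if a > e)   # nearest measured above
        frac = (e - lo) / (hi - lo)
        levels.append(measured[lo] + frac * (measured[hi] - measured[lo]))
    return levels

# Five measured electrodes, as in the first fit behavioral technique:
full = interpolate_levels({1: 100, 6: 110, 11: 120, 16: 115, 22: 105})
```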

Managing Programming (MAPping) Complaints

Most patient complaints can be effectively managed simply by manipulation of the T and C levels, as these measures are the “building blocks” upon which all additional manipulation is built. One should not, however, underestimate the effectiveness of counseling in the remediation of various complaints or misperceptions. At present, many cochlear implant centers maintain a large caseload, and the counseling portion of their programs may suffer from a lack of time. Counseling should be integrated into the entire CI process, by the entire implant team, at all stages. This is especially important during presurgical counseling and the early stages of stimulation, as the clinician attempts to reconcile a patient’s expectations with performance. It is not uncommon for a patient to feel that progress has been too slow during the initial stages of programming, and it is incumbent upon the team to help the patient through this time. Auditory training (critical for children, recommended for adults) can be a powerful tool in this process. Over the last few years, aural rehabilitation for adults has become more commonplace, either in the form of one-to-one therapy or through the use of computer technology.

Another issue that often arises during the course of programming is that a patient who has been making progress reaches a performance plateau. This may in some patients signal the need for a more aggressive parameter change, for example, speech coding strategy, repetition rate, etc. The programming audiologist should not be reluctant to institute such changes while closely monitoring the patient for any performance degradation. This process needs to be incorporated with the close collaboration of the patient’s therapist. There are occasions where a patient might best be served by seeing another audiologist at the center to offer a different perspective; taking a fresh approach to a patient’s programming issues often can be helpful.

One of the most common issues arising in the early stages of programming is the patient’s inability to effectively manipulate and understand the different functions of the volume and sensitivity controls. The audiologist must explain, often repeatedly, that the comfort level (C/M level) is directly related to the volume control, so any change to the volume control will influence loudness, whereas the sensitivity control influences distance hearing and perception in noise. Additionally, as the process matures, parents, patients, and therapists must understand the appropriate use of the multiple-program approach; that is, evaluating the performance of an individual with multiple programs should be done in a structured listening environment and is often performed by the child’s speech therapist. Each of the programs, usually differing by a single parameter (gain, maxima, input dynamic range, etc.), is assessed during a therapy session to determine the program that provides maximum speech understanding. Failure to do so can result in the use of a “stale” MAP, that is, one with inappropriate T and C/M levels at the time a particular MAP is used.

There are auditory (MAPping) complaints that are universal, including, but not limited to, speech or environmental sounds being too loud/soft, other voices or the patient’s own voice having increased echo, background noises being inappropriately loud in relation to the speech signal, and voices being too high/low pitched. Actions for each of these scenarios are clearly outlined in the manufacturers’ technical service manuals, in addition to an extensive manufacturer support system for the programming audiologist.

Device Failures (Hard/Soft)

As the number of cochlear implant recipients has increased over the past 25 years, so has the number of device failures. Although failures still represent a very small percentage of the user population, the absolute numbers are large enough to require mention in this chapter. The speed at which a device failure is accurately diagnosed and corrected is critical to the process. Device failures can be divided into two categories: hard failures, where the internal device ceases to function entirely, and soft failures, where a device functions suboptimally but a suspected mechanical or electronic malfunction exists that cannot be confirmed with current clinical diagnostic tools. Prior to suspecting a device failure, the clinician should check and replace the external hardware, and the device should be reprogrammed. With the advent of bidirectional telemetry, hard device failures have become relatively easy to diagnose. Following the initial diagnosis of a hard failure, a device integrity test is typically performed, either by the CI audiologist or by a manufacturer representative. An integrity test measures the voltages generated by the biphasic current pulses at the electrode array, which can be accomplished by the use of surface electrodes. Concurrently, a plain film is obtained to assess electrode position and to compare it with the film taken intraoperatively.

Soft failures can be subdivided into long-term progressive or short-term progressive failures, which may be more elusive to diagnose. It is quite possible that an integrity test may not diagnose a soft failure and the implant team may need to consider explantation in the absence of electrophysiologic data to support device failure. Progressive failures may include electrode migration, intermittent electrode shorting, or open circuits, which may manifest in progressively poorer speech perception for the patient. This underscores the need for periodic speech perception evaluations to objectively compare pre-and postevent performance. This often expedites insurance approval for reimplantation. From an auditory standpoint, patients may complain of atypical tinnitus, buzzing, loud intermittent noises, clicking, roaring, etc. This may be manifested from a MAPping perspective by facial nerve stimulation, fluctuant T and C levels, reduction in usable channels, frequent manipulation of pulse width, narrow dynamic ranges, wide variation in impedance or compliance measures, and the need for frequent MAPping. For patients unable to provide valuable feedback, such as children, behavioral issues may be evidenced, such as pulling the device off, aggressive behavior, inexplicable crying, etc. More sophisticated diagnostic tools are required to more effectively diagnose device failures (hard and soft), so that appropriate measures may be taken in a more timely fashion. Reimplantation should occur as quickly as possible following the failure determination so performance is not compromised (Waltzman et al, 2004).

Objective Programming Techniques

As the criteria for cochlear implant candidacy have broadened over time to include younger ages at implantation and the implantation of the developmentally delayed or disabled, the use of objective electrophysiologic measures to assist in device programming has taken on an increasingly important role. Historically, electrophysiologic measures have been used throughout the cochlear implant process: preoperatively, intraoperatively, and/or postoperatively (Shallop, 1993). They have been used preoperatively as a possible predictor of postoperative performance (Waltzman et al, 1991) or for ear selection (Kileny et al, 1992) (promontory stimulation); intraoperatively, to assess device integrity and neural stimulation [electrically evoked auditory brainstem response (EABR), averaged evoked voltages (AEV), electrical stapedial reflexes (elicited acoustic reflex threshold, EART), neural response telemetry/imaging (NRT, NRI), and electrical impedances]; and postoperatively, to assess device integrity (AEV and EABR) and to program the device (EABR, EART, and NRT/NRI). The following subsections focus on electrophysiologic measures as they relate to postoperative device programming.

Evoked Auditory Brainstem Response

Several researchers have investigated the relationship between EABR thresholds and behavioral measures of T and C levels in Nucleus 22-channel cochlear implant users (Kileny, 1991; Mason et al, 1993; Shallop et al, 1990, 1991) and have found varying degrees of correlation between these measures. Shallop et al (1991) studied the relationship between intraoperative EABR thresholds and behavioral measures of thresholds and maximum comfort levels for 11 patients implanted with the Nucleus 22 device. They found EABR to more closely approximate the maximum C levels rather than the behavioral T levels, and on occasion to exceed behavioral C levels by more than 20 programming units. They suggested that differences between EABR thresholds and behavioral T levels might be partially due to differences in the rate of stimulation used for the two procedures. Mason et al (1993) studied the relationship between intraoperative EABR thresholds and the behavioral T levels for 24 children. They reported that EABR thresholds consistently overestimated T levels by an average of 35 programming units. The use of correction factors to improve this predictive model was moderately successful. Factors that confound these correlations include postoperative changes in the EABR growth function over time and reduction in impedance during the first few months of electrical stimulation.

In 1994, Brown et al studied the relationship between EABR thresholds obtained both intraoperatively and post-stimulation and behavioral T and C levels in 26 subjects—12 postlingually deafened adults and 14 perilingually deafened children. The results suggested a strong correlation between EABR thresholds and behavioral T and C levels when the EABR thresholds consistently fell within the behavioral dynamic range. Additionally, there was a correspondence between the configuration of the EABR thresholds and the configuration of the MAP, although there was no correspondence between absolute EABR thresholds and the T or C levels. They concluded that, although EABR thresholds cannot be used as a predictor of behavioral T or C levels, they can be used as a conditioning tool in the difficult-to-program child. Because the configuration of the EABR threshold-versus-electrode curve is a good indication of the configuration of the MAP, the programming audiologist can interpolate T and C levels on electrodes not obtained through behavioral methods. Gordon et al (2004b) confirmed that EABR could be reliably measured in children using the Nucleus 24 device and did not note any significant change in levels in the first 6 to 12 months of implant use. In summary, the EABR, if interpreted cautiously, may provide a valuable starting point in the difficult-to-test population.

Elicited Acoustic Reflex Threshold

The feasibility of using the electrically elicited acoustic reflex threshold (EART) as a tool in programming the difficult-to-test population has also been explored. Jerger et al (1986) determined that it was indeed possible to elicit a stapedial reflex by electrical stimulation in a patient who received the Nucleus multichannel cochlear implant. In a follow-up study involving seven adult subjects (Jerger et al, 1988), behavioral C levels were close to the electrical reflex threshold and below reflex saturation in all subjects. These results have been echoed by others who have found good agreement between EART and behavioral C levels in adult cochlear implant users. In 1990, Hodges studied six patients in an attempt to correlate EART thresholds with C levels in Nucleus patients. She demonstrated a strong correlation between the measured EART and C levels. At the same time, Battmer et al (1990) studied the amplitude growth function of the EART in 25 subjects with the Nucleus 22. They were able to elicit an EART in 76% of the patients studied. Amplitude growth was in agreement with the Jerger findings in that saturation was near the C level. In 1994, Spivak and Chute studied the relationship between behavioral C levels and EART in 35 Nucleus cochlear implant patients. The results suggested that the relationship between EART and C levels could vary considerably among subjects. First, EARTs were obtained for 69% of subjects (12 adults and 12 children); these results were similar to those of Battmer. Second, for the 31% of subjects in whom no EART was observed, no middle ear pathology was present, suggesting that another mechanism might be responsible for the lack of response. Third, and most importantly, close agreement between EARTs and C levels was seen in only 50% of subjects with an EART, with the EART significantly overestimating or underestimating C levels for the other 50%.
They postulated that the EART might prove to be a long-term predictor of the stable C levels that are reached within the first 6 to 9 months poststimulation; however, no data exist to support this hypothesis.

Gordon et al (2004a) proposed a method of obtaining comfortable stimulation levels through the use of the EART in Nucleus 24 users who could not provide behavioral responses. Buckler and Overstreet (2003) demonstrated a systematic relationship between speech burst EARTs and HiResolution programming units. They found that speech burst EARTs are “highly correlated” with speech burst M levels in patients using HiResolution sound processing. This information, they further conclude, can be useful in setting M levels in populations for whom behavioral information is difficult to obtain (the very young, the long-term deafened, the cognitively impaired, etc.). Limitations of using the EART as a programming tool include, but are not limited to, the prevalence of middle ear disease, which can obliterate a response, and the need for the child to remain motionless during the 15 minutes it takes to obtain the EART for 20 electrodes. Despite these drawbacks, however, the EART can provide the clinician with a starting point for psychophysical testing and with information regarding maximum stimulation level.

Evoked Compound Action Potential

In 1990, Brown and Abbas demonstrated the ability to directly measure the electrically evoked compound action potential (ECAP) in Ineraid cochlear implant users. Until recently, the primary method of directly recording this potential was through either an Ineraid percutaneous plug or a temporary electrode placed in the cochlea (Brown and Abbas, 1990, 1996), and the technique therefore did not find widespread application. The Nucleus CI24M cochlear implant features a neural response telemetry (NRT) system that allows for the measurement of whole-nerve action potentials. The system, as described by Brown and Abbas (1990), “allows the voltage in a specific electrode pair to be recorded by a neighboring electrode after a stimulation pulse is presented.” This voltage is amplified, digitized, and transmitted back to the programming computer, where it is averaged and the response waveform displayed. The NRT response is characterized by a single negative peak (N1) that occurs with a latency of about 0.2 to 0.4 ms following stimulus onset, followed by a positive peak (P2). As the stimulus level is increased, the ECAP amplitude increases; growth and recovery of the response can then be systematically evaluated. The ECAP has several advantages over the EABR in assessing the response of the auditory system. First, the response is much larger than the EABR. Second, the intracochlear location of the recording electrode results in less contamination from muscle artifact, which obviates the need for sedation and allows these tools to be incorporated into the routine postoperative evaluation of an implanted child (American Speech-Language Hearing Association, 2004).

Brown et al (1994) studied the relationship between ECAP thresholds and MAP T and C levels in 22 postlingually deafened adults. They found that 36% (eight of 22) of the subjects had evoked action potential (EAP) thresholds that fell within approximately 5 programming units of their C levels; 50% (11 of 22) had EAP thresholds that typically fell in the top half of the dynamic range; and 14% (three of 22) had EAP thresholds that were 10 or more programming units higher than their C levels for the majority of electrodes tested. Brown et al (1997) demonstrated the ability to reliably record EAP responses in 17 of 19 adults and five of six children tested. No responses were obtained for one child who required a surgical drill-out and did not perceive any auditory stimulation with the device. They also found a strong correlation between EAP thresholds and behavioral thresholds. Hughes et al (1998) suggest that, with the exception of initial stimulation, EAP thresholds in children using the CI24M consistently fall within the MAP dynamic range, that is, between the T and C levels. Several other investigators have demonstrated that the ECAP is typically recorded at levels where the programming stimulus is audible to the child (DiNardo et al, 2003; Franck et al, 2001; Gordon et al, 2004a). Researchers have also demonstrated that the contour of ECAP thresholds across electrodes often follows the contour of behaviorally measured M levels (Brown et al, 2000). Gordon et al (2002) assessed the ECAPs of 37 children who underwent implantation of the Nucleus 24 device between the ages of 12 and 24 months. They found the ECAPs to be of large amplitude, with thresholds falling between behavioral T and C levels. (A related measure, defined for Advanced Bionics’ neural response imaging [NRI] system, is the tNRI: if one draws a line through the input–output function and extrapolates down to the stimulus level that would elicit a threshold ECAP response, that level is the tNRI [Koch and Overstreet, 2003].)
A correction factor applied to the ECAP thresholds provided a useful prediction of T levels, and they concluded that NRT could be used to ensure adequate auditory stimulation at initial stimulation even in this age group. Gordon et al (2004b) demonstrated that EABR and ECAP thresholds did not significantly change over the first 6 and 12 months of implant use, respectively, whereas ESRT thresholds increased. Hughes et al (2001) performed a longitudinal study of the relationships among electrode impedance, ECAP measures, and behavioral measures in Nucleus 24 cochlear implant users and concluded that beyond the 1- to 2-month visit, children exhibited significant increases in electrode impedance, ECAP thresholds, slope, and MAP T levels, whereas these same measures remained stable in adults.
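
The two ideas above — extrapolating an ECAP threshold from the amplitude growth (input–output) function and then applying a correction factor to estimate a behavioral level — can be sketched as follows. This is purely illustrative: the stimulus levels, amplitudes, and the `correction` offset are hypothetical values, not published constants from any of the cited studies.

```python
# Illustrative sketch: estimate an ECAP threshold by fitting a line to the
# amplitude growth function and extrapolating down to zero amplitude, then
# apply a (hypothetical) correction factor to predict a behavioral T level.

def extrapolated_ecap_threshold(levels, amplitudes):
    """Least-squares line through (stimulus level, ECAP amplitude) pairs;
    returns the stimulus level at which the fitted line crosses zero."""
    n = len(levels)
    mean_x = sum(levels) / n
    mean_y = sum(amplitudes) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(levels, amplitudes))
             / sum((x - mean_x) ** 2 for x in levels))
    intercept = mean_y - slope * mean_x
    return -intercept / slope  # level where predicted amplitude = 0

# Hypothetical growth-function data: stimulus in programming units,
# ECAP amplitude in microvolts.
levels = [180, 190, 200, 210]
amplitudes = [20.0, 60.0, 100.0, 140.0]

threshold = extrapolated_ecap_threshold(levels, amplitudes)
print(round(threshold, 1))  # -> 175.0 (extrapolated ECAP threshold)

# A clinic-derived correction offset (value here is invented for illustration)
# might then be subtracted to estimate a behavioral T level.
correction = 10.0
estimated_t = threshold - correction
```

The key design point is that suprathreshold ECAP amplitudes grow roughly linearly with stimulus level, so a line fit to a few measurable points can be projected below the noise floor to estimate threshold without stimulating at levels where no response is visible.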

Kaplan-Neeman (2004) evaluated the efficacy of NRT-based versus behaviorally based cochlear implant programming with respect to MAP T and C levels and speech perception abilities in 10 congenitally deafened children between the ages of 12 and 39 months. The results suggest no significant differences between NRT-based and behaviorally based MAPs. These studies also suggest that if ECAP thresholds are to be used to assist in device programming, it is prudent to obtain them at the same visit as device programming rather than rely on previously obtained measures. Objective programming is now sufficiently widespread that NRT data can be imported directly from a linked software application and used to generate objective MAPs. Three different techniques for using NRT data to generate MAPs are built into the Nucleus software: progressive preset MAP, set T and C profile, and determine T/C offset. Advanced Bionics Corporation has recently developed a technique to measure the ECAP, referred to as neural response imaging (NRI). In contrast to the NRT in the Nucleus device, which uses a masker-probe technique to cancel the stimulus artifact, NRI uses an alternating polarity approach to cancel the rather large stimulus artifact. Investigations of 19 subjects who participated in the HiResolution clinical trial suggest that the average first NRI response occurred at 85% of the M level, whereas the tNRI was at 65% of the M level (Koch and Overstreet, 2003). Koch and Overstreet (2003) also showed that the average levels required to elicit an ECAP, after appropriate conversion factors are applied, are similar across devices.
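
The alternating-polarity approach can be illustrated with a toy sketch: the stimulus artifact inverts when the stimulus polarity is flipped, while the neural response does not, so averaging the two recordings cancels the artifact and leaves the response. All waveforms below are synthetic and chosen only to make the cancellation visible.

```python
# Toy illustration of alternating-polarity artifact cancellation: the stimulus
# artifact flips sign with stimulus polarity, but the neural (ECAP-like)
# response keeps the same sign, so the average of the two sweeps retains the
# response and cancels the artifact. All waveforms are synthetic.

def average_sweeps(sweep_a, sweep_b):
    """Point-by-point average of two recorded sweeps."""
    return [(a + b) / 2 for a, b in zip(sweep_a, sweep_b)]

neural = [0.0, -5.0, -10.0, -4.0, 2.0, 0.0]   # response: same sign in both sweeps
artifact = [50.0, 30.0, 10.0, 2.0, 0.0, 0.0]  # large, decaying stimulus artifact

# Sweep to a positive-leading pulse: artifact plus response.
sweep_pos = [a + n for a, n in zip(artifact, neural)]
# Sweep to a negative-leading pulse: inverted artifact plus response.
sweep_neg = [-a + n for a, n in zip(artifact, neural)]

recovered = average_sweeps(sweep_pos, sweep_neg)
print(recovered)  # -> [0.0, -5.0, -10.0, -4.0, 2.0, 0.0], the neural waveform
```

The masker-probe technique used by NRT solves the same artifact problem differently, by subtracting a recording in which the neural response has been suppressed by a preceding masker so that only the artifact remains.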

In summary, NRT and NRI are noninvasive, can be acquired in awake patients, and do not require commercial evoked potential averaging equipment, making them valuable tools in programming a variety of patients including children and the difficult-to-test. Further investigations are underway involving NRT/NRI to assess channel interactions and neural growth functions. Additionally, comparing NRI responses to evoked potentials from higher auditory centers may shed light on the entire auditory pathway in cochlear implant users (Koch and Overstreet, 2003).

Follow-Up Programming

Regardless of age, accurate electrical thresholds and comfort levels remain a major contributor to postoperative performance. Research has demonstrated that electrical thresholds can fluctuate during the first year following initial stimulation, emphasizing the need to set up a comprehensive programming schedule to ensure maximum benefit from the device. The following first-year schedule has been suggested for children after initial stimulation: at 1 to 2 weeks, 4 to 5 weeks, 3 months, 4 to 5 months, 6 months, twice between 6 and 12 months, and at 12 months. Subsequent visits usually occur at 3-month intervals (Shapiro and Waltzman, 1995). Additional programming sessions should be scheduled if certain changes in the child’s auditory responsiveness or speech production occur. These changes include, but are not limited to, changes in auditory discrimination, increased requests for repetition, addition and/or omission of syllables, prolongation of vowels, and change in vocal quality. The length of time a child requires to adjust to a new program can vary greatly. It is therefore important that his or her therapist, teachers, and parents monitor the child’s speech perception and production to provide input to the programming audiologist, and the programming audiologist should provide the therapist with a copy of the changes made to the program so that they can work together to maximize performance. Although an important relationship exists between speech perception and speech production, a change in a child’s speech production may actually have little correlation with the need for a program adjustment. Additionally, all professionals involved with the child need to appreciate the limits of a cochlear implant; that is, not everybody will be an excellent performer. Typically, however, the most effective program is the one that requires minimal manipulation; obtaining accurate psychophysical measures will usually lead to optimal performance.

The programming timetable for adults is often not as strict and comprehensive as for children and should allow for greater autonomy on the part of the patient. The typical first-year follow-up schedule for adults after initial stimulation is 10 days, 6 weeks, 3 months, 6 months, and 12 months, with visits once or twice per year thereafter, depending on the patient.

Device program changes are not the only determinants that contribute to postoperative performance. Age at implantation, family support, duration of deafness, communicative approach, cognitive ability, and length of device usage are but a few of the other variables that can affect performance, and ongoing counseling of patients/parents regarding all issues can lead to a more satisfied and optimal user.


References
Advanced Bionics Corporation. (1995). Clarion Device Fitting Manual. Sylmar, CA: Advanced Bionics

Advanced Bionics Corporation. (2003). New methodology for fitting cochlear implants. Valencia, CA: Advanced Bionics

American Speech-Language Hearing Association. (2004). Technical report: cochlear implants. ASHA Suppl 24:1–35

Battmer RD, Laszig R, Lehnhardt E. (1990). Electrically elicited stapedius reflex in cochlear implant patients. Ear Hear 11:370–374

Brown CJ, Abbas PJ. (1990). Electrically evoked whole-nerve action potentials: Data from human cochlear implant users. J Acoust Soc Am 88:1385–1391

Brown CJ, Abbas PJ. (1996). Electrically evoked whole-nerve action potentials in Ineraid Cochlear implant users: Responses to the different stimulating electrode configurations and comparison to psychophysical responses. J Speech Hear Res 39:453–467

Brown CJ, Abbas PJ, Fryauf-Bertschy H, Kelsay D, Gantz B. (1994). Intraoperative and postoperative electrically evoked auditory brainstem responses in Nucleus cochlear implant users: Implications for the fitting process. Ear Hear 15:168–176

Brown CJ, Hong SH, Hughes M, Lowder M, Parkinson W, Abbas PJ. (1997). Comparisons between electrically evoked whole nerve action potential (EAP) thresholds and the behavioral levels used to program the speech processor of the Nucleus CI24M cochlear implant. Presented at the 7th symposium on cochlear implants in children, Iowa City, IA

Brown CJ, Hughes M, Luk B, Abbas P, Wolaver A, Gervais J. (2000). The relationship between EAP and EABR thresholds and levels used to program the Nucleus 24 speech processor: data from adults. Ear Hear 21:151–163

Buckler L, Overstreet E. (2003). Relationship Between Electrical Stapedial Reflex Thresholds and Hi-Res Program Settings: Potential Tool for Pediatric Cochlear-Implant Fitting. Valencia, CA: Advanced Bionics

Chute PM, Popp A. (2004). Preliminary results of mapping procedures in cochlear implant centers in North America. Streamlined Programming News. Denver, CO: Cochlear Americas

Cochlear Corporation. (1996). Technical Reference Manual. Englewood, CO: Cochlear Corporation

Cochlear Corporation. (1997). Clinical Bulletin. Englewood, CO: Cochlear Corporation

Cochlear Corporation. (1998a). Encoder Optimization Study. Englewood, CO: Cochlear Corporation

Cochlear Corporation. (1998b). Recommended ACE Fitting Strategy for SPEAK Conversion Patients. Englewood, CO: Cochlear Corporation

Cochlear Corporation. (1998c). Recommended CIS Fitting Strategy for SPEAK Conversion Patients. Englewood, CO: Cochlear Corporation

Cochlear Corporation. (1998d). Win DPS Programming Summary. Englewood, CO: Cochlear Corporation

Di Nardo W, Ippolito S, Quaranta N, Cadoni G, Galli J. (2003). Correlation between NRT measurement and behavioral levels in patients with the Nucleus 24 cochlear implant. Acta Otorhinolaryngol Ital 23: 352–355

Fourakis MS, Hawks JW, Holden LK, Skinner MW, Holden TA. (2004). Effect of frequency boundary assignment on vowel recognition with Nucleus 24 ACE speech coding strategy. J Am Acad Audiol 15:281–299

Franck KH, Norton SJ. (2001). Estimation of psychophysical levels using the electrically evoked compound action potential measured with the neural response telemetry capabilities of Cochlear Corporation’s CI24M device. Ear Hear 22:289–299

Garber S, Ridgely MS, Bradley M, Chin KW. (2002). Payment under public and private insurance and access to cochlear implants. Arch Otolaryngol Head Neck Surg 128:1145–1152

Gordon KA, Ebinger KA, Gilden JE, Shapiro WH. (2002). Neural response telemetry in 12- to 24-month-old children. Ann Otol Rhinol Laryngol Suppl 189:42–48

Gordon K, Papsin BC, Harrison RV. (2004a). Programming cochlear implant stimulation levels in infants and children with a combination of objective measures. Int J Audiol 43(suppl 1):S28–S32

Gordon K, Papsin BC, Harrison RV. (2004b). Toward a battery of behavioral and objective measures to achieve optimal cochlear implant stimulation levels in children. Ear Hear 25:447–463

Henkin Y, Kaplan-Neeman R, Muchnik C, Kronenberg J, Hildesheimer M. (2003). Changes over time in electrical stimulation levels and electrode impedance values in children using the Nucleus 24M cochlear implant. Int J Pediatr Otorhinolaryngol 67:873–880

Hodges AV. (1990). The relationship between electric auditory evoked responses and psychophysical percepts obtained through a Nucleus 22 channel cochlear implant. Ph.D. Dissertation, University of Virginia, Charlottesville, VA

Hughes ML, Abbas PJ, Brown CJ, et al. (1998). Using neural response telemetry to measure electrically evoked compound action potentials in children with the Nucleus CI24M cochlear implant. Presented at the 7th symposium on cochlear implants in children, Iowa City, IA

Hughes ML, Vander Werff KR, Brown CJ, Abbas PJ, Kelsay DMR, Teagle HFB, Lowder MW. (2001). A longitudinal study of electrode impedance, EAP, and behavioral measures in Nucleus 24 cochlear implant users. Ear Hear 22:471–486

Jerger J, Jenkins H, Fifer R, Mecklenburg D. (1986). Stapedius reflex to electrical stimulation in a patient with a cochlear implant. Ann Otol Rhinol Laryngol 95:151–157

Jerger J, Oliver TA, Chimel RA. (1988). Prediction of dynamic range from stapedius reflex in cochlear implant patients. Ear Hear 9:4–8

Kessler DK. (1999). The CLARION Multi-Strategy Cochlear Implant. Ann Otol Rhinol Laryngol Suppl 177:8–16

Kileny PR. (1991). Use of electrophysiologic measures in the management of children with cochlear implants: brainstem, middle latency, and cognitive (P300) responses. Am J Otol 12(suppl):37–42

Kileny PR, Zwolan TA, Zimmerman-Phillips S, Kemink JL. (1992). A comparison of round-window and transtympanic electric stimulation in cochlear implant candidates. Ear Hear 13:294–299

Koch D, Overstreet E. (2003). Neural response imaging; measuring auditory-nerve responses from the cochlea with the Hi-resolution bionic ear system. Advanced Bionics Technical paper, pp. 1–5

Loizou PC, Poroy O, Dorman M. (2000). The effect of parametric variations of cochlear implant processors on speech understanding. J Acoust Soc Am 108(2):790–802

Mason SM, Sheppard DS, Garnham CW, Lutman ME, O’Donoghue GM, Gibbin KP. (1993). Improving the relationship of intraoperative EABR thresholds to T-level in young children receiving the Nucleus cochlear implant. Paper presented at the 3rd International Cochlear Implant Conference, Innsbruck, Austria

Med-El Corporation. (1997). CIS PRO + Audiologist Manual. Vienna, Austria: Med-El

Plant K, Psarros C. (2000). Comparison of a standard and interpolation method of T- and C-level measurement, using both narrowly spaced and widely spaced electrodes. Nucleus Technical Paper. Englewood, CO: Cochlear Corporation

Shallop JK. (1993). Objective electrophysiological measures from cochlear implant patients. Ear Hear 14:58–63

Shallop JK, Beiter AL, Goin DW, Mischke RE. (1990). Electrically evoked auditory brain-stem response (EABR) and middle latency response (EMLR) obtained from patients with the Nucleus multichannel implant. Ear Hear 11:5–15

Shallop JK, Van Dyke L, Goin D, Mischke R. (1991). Prediction of behavioral thresholds and comfort values for the Nucleus 22 channel implant patients from electrical auditory brainstem response test results. Ann Otol Rhinol Laryngol 100:896–898

Shapiro WH, Waltzman SB. (1995). Changes in electrical thresholds over time in young children implanted with the Nucleus cochlear implant prosthesis. Ann Otol Rhinol Laryngol Suppl 104:177–178

Shapiro WH, Waltzman SB. (1998). Cochlear implant programming for children: the basics. In: Estabrooks W, ed. Cochlear Implants for Kids, pp. 58–68. Washington, DC: AG Bell

Skinner MW, Holden LK, Holden TA. (1999). Comparison of two methods for selecting minimum stimulation levels used in programming the Nucleus 22 cochlear implant. J Speech Lang Hear Res 42:814–828

Skinner MW, Holden LK, Holden TA. (1994). Effect of frequency boundary assignment on speech recognition with the Speak speech-coding strategy. Ann Otol Rhinol Laryngol Suppl 104(suppl 166):307–311

Skinner MW, Holden LK, Holden TA. (1997). Parameter selection to optimize speech recognition with the Nucleus implant. Otolaryngol Head Neck Surg 117:188–195

Skinner MW, Holden LK, Holden TA. (1998). Comparison of two methods for selecting minimum stimulation levels used in programming of the Nucleus 22 cochlear implant. Presented at the American Academy of Auditory Implants

Spahr AJ, Dorman MF. (2005). The effects of minimum stimulation settings for the Med-El Tempo+ speech processor on speech understanding. Ear Hear 26(4 suppl):2S–6S

Spivak LG, Chute PM. (1994). The relationship between electrical acoustic reflex thresholds and behavioral comfort levels in children and adult cochlear implant patients. Ear Hear 15:184–192

Waltzman SB, Cohen NL, Shapiro WH, Hoffman RA. (1991). The prognostic value of round window electrical stimulation in cochlear implant patients. Otolaryngol Head Neck Surg 103:102–106

Waltzman SB, Roland JT, Waltzman MN, et al. (2004). Cochlear reimplantation in children: soft signs, symptoms and results. Cochlear Implants International 5:138–145
