Current Research on Music Perception in Cochlear Implant Users




The authors present a comprehensive review of the state of music perception in cochlear implant (CI) users. They discuss methods of assessment and the results of studies of individual aspects of music perception (melody, timbre, rhythm, and so forth) in individuals with cochlear implants, as well as the neural mechanisms of music perception and the anticipated broader acceptance of standardized tests of music perception in CI users.


Cochlear implantation has undergone tremendous evolution over the past 5 years, with advances in processing strategies, electrode design, and a greater push toward bilateral implantation. Despite these many advances, however, music perception in cochlear implant (CI) users remains essentially limited, with severe deficits observed in pitch and timbre perception. Although rare individuals can be found who demonstrate high-level music performance through a cochlear implant, even users with excellent speech understanding typically display major limitations in their ability to perceive (much less perform) music. Recent research has led to a more rigorous development of psychophysical test methods and parameters to assess music perception in implant users. These methods have enabled the analysis of perceptual difficulties experienced by implant users for increasingly complex (and realistic) musical stimuli. Furthermore, neuroimaging experiments have helped characterize the central neural correlates of implant-mediated music perception in comparison with individuals with normal hearing (NH). This article reviews some of the notable recent advances in music perception for CI users.


Overview of recent studies


Exposure to Music


A growing body of literature over the past decade has contributed to a thorough characterization of the limitations of music perception demonstrated by CI users. Many of the early studies described significant impairment in melody perception for CI users, whereas perception of rhythmic stimuli seemed to remain relatively intact. Recent studies have both confirmed and extended these initial findings. Migirov and colleagues used a simple questionnaire to assess listening habits in postlingually deafened CI users, demonstrating that about 50% of the subjects tested reported reduced enjoyment of music following implantation. Much of this difficulty is attributable to implant users' difficulty following melodic contours, an ability that deteriorates significantly in the presence of competing instrumentation. Unfortunately, these abilities do not seem to improve over time as a result of regular auditory exposure, as demonstrated in a longitudinal study of music perception and appraisal. Instead, a growing number of studies have shown the importance of musical training for the improvement of pitch perception. These studies have included prelingually deaf children with CIs, who lack substantial prior music experience and, rather than finding music disconcerting, tend to find it enjoyable and interesting while also showing improvement over time. For pitch tasks, it seems increasingly clear that musical experience leads to better performance in implant users than in individuals without musical experience, with near-equivalent performance on some tasks (eg, melodic contour identification) in comparison with NH listeners. These trends are supported by results showing that NH listeners can display improved music perception when listening to implant simulations.


Auditory Training Programs for Music Perception


The improvements in performance that result from musical experience point clearly to the need for further development of auditory training programs directed toward music perception. Looi and She administered a questionnaire to 100 CI users that focused on obtaining information germane to the development of a training program specifically oriented toward music. Thus far, music plays a minor role (if any) during routine postimplant rehabilitation, which is typically focused on speech perception; music is regarded in this sense as a lower-priority objective. However, it should be emphasized that music remains an acoustically richer and, therefore, more challenging stimulus than speech; consequently, improved performance on music tasks (through music rehabilitation) is likely to have a broad beneficial impact on implant-mediated listening well beyond musical stimuli per se. In conjunction with the need to develop rehabilitation strategies and programs, individual filter strategies aimed at improving pitch mapping, technological advances in sound processing strategies, and bilateral implantation all show great promise for improving music perception in implant users.




Methods of assessment: the development of music perception tests for CI listeners


Many recent studies have focused on the fact that few standardized measures of music performance exist for CI users. It seems ironic that although studies of the mechanisms of temporal and place pitch perception with electrical stimulation of the cochlea represent some of the earliest experiments performed on implant recipients, subsequent interest in music perception with these devices would take more than a decade to develop and would not truly flourish until the following decade. The reason for this is, of course, that once pitch perception with CIs was documented, speech perception became the next target of both clinicians and scientists in the field. Interest in music perception only had the opportunity to arise once it became clear that speech perception was not only possible but an expected clinical outcome in most implant recipients. From the beginning of interest in music perception with implants, testing could be divided into the more psychophysical approach, whereby performance was rigorously tested on a variety of materials, and questionnaire-based assessments of music appreciation. This review focuses on the former, although it is clear that enjoyment and appreciation of music as measured by a questionnaire may have little relationship to the accuracy of music perception as measured by a perceptual test battery. This consideration may lead some to appropriately question the relevance of perceptual testing; however, it is difficult for these authors to imagine that restoration of normal music perceptual ability in CI users would not ultimately lead to normal appreciation. In addition, perceptual testing correlates highly with speech perception in quiet and in noise as well as with a variety of clinically important psychophysical tasks. Maximizing hearing benefit from a CI requires perceptual testing to measure the effects, or lack thereof, of a given intervention, whether training or technology.


Perceptual Results


Early perceptual results (see McDermott for review) made it clear that CI users performed comparably with NH listeners on rhythmic tasks; extremely poorly, indeed near chance, on melody tasks when deprived of lyrics or rhythmic cues; and poorly, although typically better than chance, on timbre tasks as assessed by musical instrument identification. These results have been confirmed in a later series. It was also clear that musical training has the potential to improve music perception, although the potential limits of training benefits were unknown and remain so today. Lastly, addition of low-frequency acoustic information seems to be substantially beneficial for melody perception either through hybrid or electroacoustic hearing in the ipsilateral ear or bimodal hearing via the contralateral, unimplanted ear.


Basic Metrics


Music is a complex language involving a rich and diverse interplay of pitch, timbral, rhythmic, harmonic, lyrical, and interval-based perceptual features. As the limitations of CIs in representing some of these features have become clear, appropriate methods for the development of music perception test materials can be better defined. These definitions do involve certain testing trade-offs, but determination of basic performance metrics for CI users is critical both to the future development of speech processing strategies and to our understanding of implant-mediated music perception. As in speech testing, open-set song identification would seem the most clinically pertinent test; however, because songs potentially include lyrics, speech perception ability will artifactually enhance performance and potentially allow patients with no melody perception but excellent speech discrimination to score 100% on such a task. Because rhythmic perception is normal or near normal in most implant users, rhythmic cues can have a similar effect and thus need to be removed if the goal of testing is to measure melody discrimination. Such isochronous melody perception is extremely challenging for most implant users, and a closed-set test is therefore necessary to minimize floor effects. Such tests have been developed and used by a wide variety of investigators and typically consist of nursery rhymes and other simple, highly familiar melodies. Across-language versions of such tests have also been described.
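To make the idea of removing rhythmic cues concrete, the brief sketch below renders a familiar melody isochronously by assigning every note the same duration. The melody encoding, note values, and the fixed duration are illustrative assumptions rather than materials from any published test.

```python
# Minimal sketch: stripping rhythmic cues from a familiar melody so that a
# closed-set test probes melody discrimination alone. The (MIDI pitch,
# duration in beats) encoding and the fixed note duration are illustrative
# assumptions, not test materials from the studies reviewed here.

melody = [(60, 1.0), (60, 1.0), (67, 1.0), (67, 1.0),
          (69, 1.0), (69, 1.0), (67, 2.0)]  # opening of a familiar tune

def make_isochronous(notes, note_duration=0.5):
    """Replace every note duration with a constant value, removing rhythm cues."""
    return [(pitch, note_duration) for pitch, _ in notes]

print(make_isochronous(melody))  # every note now has the same duration
```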


Isochronous Melody Perception


Although isochronous melody perception is a central component of many musical test batteries, it has significant limitations. Because many CI users are able to discriminate pitch direction for 1-semitone stimuli, isochronous melody testing is necessary to avoid the ceiling effects of pitch testing alone; however, as mentioned, the test is too difficult for many CI users, is closed-set, and depends on familiarity with the melody corpus. Recently proposed alternatives that potentially avoid some of these issues include a Melodic Contour Identification test and a Modified Melodies test. These tests attempt to assess the interval perception that underlies the discrimination of melodies. Failure to accurately perceive intervals likely explains why melody perception with implants is so poor despite many users being readily able to discriminate 1-semitone pitch changes; however, it can be difficult to assess interval perception directly when one is not working with a musically trained patient. The use of melodies and melodic contours as test materials also successfully sidesteps potential uncertainties and complexities in the definition of pitch as obtained with a CI by intrinsically defining it as the capacity to represent melodic sequences. Elimination of floor effects through the use of the Melodic Contour Identification test has permitted measurement of the effects of competing instruments on melody perception, a test arguably similar in concept to a speech perception test in noise.
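As an illustration of the contour-based approach, the sketch below generates frequencies for a handful of candidate melodic contours. The contour shapes, note count, starting pitch, and semitone spacing are illustrative assumptions and do not reproduce the parameters of the published Melodic Contour Identification test.

```python
import numpy as np

# Hedged sketch: generating candidate melodic-contour stimuli for a closed-set
# identification task. Contour shapes, note count, base pitch, and semitone
# spacing are illustrative assumptions, not the published test parameters.

CONTOURS = {
    "rising":         [0, 1, 2, 3, 4],
    "falling":        [4, 3, 2, 1, 0],
    "flat":           [0, 0, 0, 0, 0],
    "rising-falling": [0, 1, 2, 1, 0],
    "falling-rising": [2, 1, 0, 1, 2],
}

def contour_frequencies(shape, base_freq=261.63, semitone_spacing=2):
    """Map a contour (in scale steps) to frequencies in Hz, with a fixed
    number of semitones between adjacent steps."""
    steps = np.array(CONTOURS[shape]) * semitone_spacing
    return base_freq * 2.0 ** (steps / 12.0)

for name in CONTOURS:
    print(name, np.round(contour_frequencies(name), 1))
```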


Music Timbre Perception


Musical timbre perception is another critical piece of any music perception test battery. It is typically assessed through closed-set identification of commonly recognized musical instruments. The development of such materials is complicated by the difficulty in controlling for loudness and other cues introduced by whatever musical sources make up the test corpus. Some of these problems could potentially be solved through recent efforts to vary timbre electronically, but that remains to be proven. A complex interplay between timbre and melody perception has been identified and provides a sense of just how intricate the assessment of polyphonic music perception with CIs could become. This point argues for the continued use of the simplest validated test materials available when testing in a clinical environment, provided the materials adequately encompass the breadth of clinical performance. Although a wide variety of other music test materials have been developed, few have the combined attributes of being simple and fast and having undergone the validation studies needed for robust clinical trials.
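One common way to control the loudness cue mentioned above is to equalize the level of the instrument recordings before testing. The sketch below normalizes root-mean-square (RMS) level across waveforms; the target level and the use of simple RMS rather than a perceptual loudness model are assumptions made for illustration.

```python
import numpy as np

# Hedged sketch: equalizing RMS level across instrument recordings so that
# loudness does not act as an identification cue in a closed-set timbre test.
# The target level and the use of plain RMS (rather than a loudness model)
# are illustrative assumptions.

def rms_normalize(signal, target_rms=0.1):
    """Scale a waveform (numpy array) to a common root-mean-square level."""
    rms = np.sqrt(np.mean(signal ** 2))
    if rms == 0:
        return signal
    return signal * (target_rms / rms)

# Synthetic stand-ins for a loud and a quiet instrument recording
sr = 44100
t = np.arange(sr) / sr
loud_note = 0.8 * np.sin(2 * np.pi * 440 * t)
quiet_note = 0.05 * np.sin(2 * np.pi * 440 * t)
for sig in (loud_note, quiet_note):
    level = np.sqrt(np.mean(rms_normalize(sig) ** 2))
    print(round(float(level), 3))  # both now ~0.1
```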




Toward a more specific characterization of musical deficits


In addition to the large number of studies confirming deficits in melody and timbre perception and the preservation of rhythm perception in CI users, 2 studies were recently performed specifically to address the perception of polyphony and rhythmic clocking in CI users. These studies were motivated by the need to examine pitch perception beyond melody recognition in a way that was musically relevant and not previously studied, and to examine rhythm perception using an ostensibly more difficult task that avoids the ceiling effects observed in most studies of rhythm.


CI Users Demonstrate Perceptual Fusion of Polyphonic Pitch


Most studies of pitch perception in CI users have focused on pitch discrimination (in which subjects are asked to detect whether 2 sounds differ in pitch) and pitch ranking (in which subjects are asked to identify which of 2 presented pitches is higher). Relatively little research, however, has been done on the perception of polyphony (or harmony) in CI patients. In a recent study, the investigators sought to evaluate the ability of postlingually deafened adult CI users to perceive the number of pitches in acoustically presented polyphonic stimuli. Subjects listened to stimuli consisting of 1, 2, or 3 simultaneous tones with different fundamental frequencies within a single octave. Both pure tones and piano tones were used as stimuli. The investigators hypothesized that CI users would show a decreased ability to differentiate single from multiple tones in comparison with NH controls. It was further hypothesized that the ability of CI users to detect polyphony would increase as a function of the interval distance between pitches.


Pitch Study Methods


Twelve CI users and 12 NH controls participated in the study. Each CI user had at least 1 year of experience with their implant system, and none of the subjects had musical training beyond an amateur level. All stimuli consisted of pitches from within a central octave ranging in fundamental frequency (f0) from 261 Hz (C4) to 523 Hz (C5). Single-pitch stimuli consisted of either pure tones or piano tones from C4 to B4 (12 unique pitches, 24 total stimuli). Two-pitch stimuli consisted of either pure-tone or piano-tone representations of all 12 possible intervals within the range of C4 to C5 (interval distances of 1–12 semitones, 24 total stimuli). Three-pitch stimuli consisted of either pure-tone or piano-tone representations of 6 unique symmetric chords (equal interval spacing between the lower/middle and middle/higher pitches) within the range of C4 to C5.
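The sketch below enumerates a stimulus set of this structure and synthesizes pure-tone versions. The choice of C4 as the lower note of each interval and chord, and the equal-amplitude mixing, are assumptions made for illustration; the study's piano tones are not reproduced here.

```python
import numpy as np

# Hedged sketch of the pitch-number stimulus set described above: 1-, 2-, and
# 3-pitch combinations within the C4-C5 octave. Using C4 as the lower note of
# every interval/chord and equal-amplitude pure tones are illustrative
# assumptions; the study's piano-tone stimuli are not reproduced.

C4 = 60  # MIDI note number for middle C (261.63 Hz)

def midi_to_hz(m):
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

single = [[C4 + s] for s in range(12)]                    # C4..B4, 12 pitches
double = [[C4, C4 + s] for s in range(1, 13)]             # intervals of 1-12 semitones
triple = [[C4, C4 + s, C4 + 2 * s] for s in range(1, 7)]  # 6 symmetric chords

def pure_tone_mix(midi_notes, dur=1.0, sr=44100):
    """Sum equal-amplitude sinusoids, one per pitch in the stimulus."""
    t = np.arange(int(sr * dur)) / sr
    mix = sum(np.sin(2 * np.pi * midi_to_hz(m) * t) for m in midi_notes)
    return mix / len(midi_notes)

print(len(single), len(double), len(triple))   # 12, 12, 6 pure-tone stimuli
print(pure_tone_mix(triple[0]).shape)          # one synthesized 3-pitch chord
```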


Stimuli were randomly presented in a soundproof booth through a calibrated loudspeaker. Stimuli were presented in a 3-alternative, single-interval, forced-choice procedure in which the subjects were instructed to choose whether the given stimuli consisted of 1, 2, or 3 pitches. Subjects were familiarized with the stimuli and the procedure before formal testing. No feedback was given regarding the correctness of responses. The number of correct responses for each subject was averaged across the separate tone and pitch-number conditions to obtain an overall mean score.
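A minimal sketch of how such a forced-choice task might be scored is shown below; the trial records are fabricated placeholders, and the grouping by tone type and pitch number mirrors the averaging described above.

```python
from collections import defaultdict

# Hedged sketch: scoring a single-interval, 3-alternative forced-choice task.
# Each response (1, 2, or 3 pitches) is compared with the true pitch count,
# then proportion correct is averaged within each tone/pitch-number condition.
# The trial records below are fabricated placeholders, not study data.

trials = [
    {"tone": "pure",  "n_pitches": 2, "response": 1},
    {"tone": "pure",  "n_pitches": 1, "response": 1},
    {"tone": "piano", "n_pitches": 3, "response": 2},
    {"tone": "piano", "n_pitches": 3, "response": 3},
]

by_condition = defaultdict(list)
for tr in trials:
    by_condition[(tr["tone"], tr["n_pitches"])].append(
        int(tr["response"] == tr["n_pitches"]))

condition_means = {cond: sum(v) / len(v) for cond, v in by_condition.items()}
overall = sum(condition_means.values()) / len(condition_means)
print(condition_means)
print(f"overall mean score: {overall:.2f} (chance = 1/3)")
```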


Pitch Study Results


The CI group scored significantly lower than the NH group for all conditions. The overall mean scores for each subject group were 43.1 ± 12.3% for CI users and 66.9 ± 9.4% for NH subjects ( P <.001, unpaired t -test). The CI group scored close to chance levels when identifying 2- and 3-pitch stimuli. In comparison, the NH group was much more successful at distinguishing single from multiple pitches but had greater difficulties at distinguishing between 2- and 3-pitch stimuli. Although NH subjects often identified 3-pitch stimuli as having 2 pitches, CI subjects often identified both 3-pitch and 2-pitch stimuli as a single pitch. For 3-pitch conditions, there was no apparent relationship between interval spacing and the ability to detect polyphony in CI users. For 2-pitch conditions, increased interval spacing did not lead to better performance for detection of polyphony. In fact, an inverse relationship was suggested for identification of the 1-semitone interval spacing in 2-pitch conditions (minor second interval) for which CI users were nearly as accurate as NH subjects.
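For readers who wish to see how such a group comparison is computed, the sketch below runs an unpaired t-test on fabricated per-subject scores drawn to roughly match the reported group means and sizes; these values are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Hedged sketch of the group comparison reported above (unpaired t-test on
# overall mean scores). The per-subject scores are fabricated placeholders
# sampled around the reported group means; they are NOT the study data.

rng = np.random.default_rng(0)
ci_scores = rng.normal(loc=43.1, scale=12.3, size=12)  # 12 CI users
nh_scores = rng.normal(loc=66.9, scale=9.4, size=12)   # 12 NH controls

t_stat, p_val = stats.ttest_ind(ci_scores, nh_scores, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_val:.4g}")
```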


The results from this study show that CI users obtain significantly lower average scores than NH subjects when asked to distinguish between single and multiple acoustically presented tones. Although a listener’s ability to identify the number of components in a polyphonic stimulus does not necessarily correspond to one’s ability to perceive differences between polyphonic stimuli, perceptual fusion of polyphonic pitch likely impairs CI users in accurately perceiving many features of music that are polyphonic in nature, such as harmony, consonance, dissonance, and tonality. No statistically significant difference was found between average scores for pure tones and piano tones across both subject groups. This finding indicates that the presence of additional pitch information in complex tones may not aid either subject group in the resolution of polyphonic pitch. Because most music is polyphonic in nature rather than monophonic, these findings underscore the need for further research and development of processing strategies directed toward the perception of polyphony.


Rhythmic Clocking Ability Remains Intact in CI Users


Although many studies suggest that CI-mediated rhythmic perception is normal, it must be emphasized that these studies have used simple tasks, such as rhythmic pattern identification, that may not have been sensitive enough to reveal limitations in rhythmic perception for CI users. In a recent study, the investigators sought to design a task of rhythm perception for CI users that exceeded the temporal processing requirements for simple pattern recognition or tempo differentiation by focusing on the concept of internal rhythmic clocking. Rhythmic clocking deals with the capacity of a regular interval stimulus to induce a temporal clock in a listener. Also known as beat perception or synchronization, rhythmic clocking refers to the extrapolative expectancy that is established with as few as 3 isochronous beats when internal rhythmicity is intact. Rhythmic clocking is an integral concept in both music and spoken language.


Rhythm Study Methods


In this study, the investigators devised a test of rhythmic clocking in which subjects were presented with 4 percussive beats, the first 3 of which were perfectly regular in temporal interval spacing. The fourth beat was presented either isochronously or anisochronously, slightly before or after the anticipated downbeat. Subjects were asked to identify whether the fourth beat occurred early, late, or in perfect timing with respect to the rhythmic clock established by the first 3 isochronous beats. A group of highly trained conservatory musicians (MUS) was also included in the study because it was hypothesized that individuals with significant musical training would perform better on rhythmic clocking tasks than listeners without such training.


Twelve NH individuals, 12 cochlear implant users, and 7 highly trained musicians with NH participated in the study. No CI subjects had significant musical experience before or after implantation. Each category of auditory stimuli consisted of 4 percussive beats, presented as either snare drum hits or white noise bursts. The first 3 beats were perfectly isochronous, followed by a fourth beat that was either slightly early, isochronous, or slightly late. The 4 beats were presented at tempos of 60 (slow), 120 (medium), and 180 (fast) beats per minute. For the fourth beat, deviations were introduced before (early) or after (late) the isochronous position; 3 degrees of deviation were tested (1/16, 1/8, or 3/16 of a beat). Each subject was presented with a set of 84 stimuli in a soundproof booth through a single calibrated loudspeaker. Stimuli were presented in a single-interval, 3-alternative, forced-choice procedure in which the subjects were required to indicate whether the fourth beat in the given stimulus was early, isochronous, or late.
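The timing structure of these stimuli can be summarized compactly: the sketch below computes beat-onset times for the 4-beat sequences at each tempo and deviation. Audio synthesis and presentation details are omitted because they are not specified here.

```python
# Hedged sketch of the rhythmic-clocking stimulus timing: 3 isochronous beats
# followed by a fourth beat that is early, on time, or late by 1/16, 1/8, or
# 3/16 of a beat. Onset times are in seconds; sound synthesis is omitted.

TEMPOS_BPM = (60, 120, 180)
DEVIATIONS = (-3/16, -1/8, -1/16, 0, 1/16, 1/8, 3/16)  # fractions of a beat

def beat_onsets(tempo_bpm, deviation_frac):
    """Return onset times for 4 beats; the last is shifted by deviation_frac."""
    beat = 60.0 / tempo_bpm
    return [0.0, beat, 2 * beat, 3 * beat + deviation_frac * beat]

for tempo in TEMPOS_BPM:
    print(tempo, "bpm, +1/8 beat late:",
          [round(x, 3) for x in beat_onsets(tempo, 1/8)])
```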


Rhythm Study Results


The results of the rhythmic clocking tasks show that CI users performed comparably with NH participants across all tempos, sound sources, and degrees of deviation (overall mean scores for correct identification of the final beat: CI users, 56.4% ± 13.93%; NH subjects, 51.5% ± 13.82%; P = .143, unpaired t-test). However, the musician group scored significantly higher than either of the other groups, with a mean score of 70.9% ± 5.95% (P<.0001 for both MUS vs CI and MUS vs NH). An ANOVA with subject group and tempo as factors showed statistically significant differences among the NH, CI, and MUS groups at all tempos (tempo 60 beats per minute [bpm], F(2,28) = 4.55, P = .019; tempo 120 bpm, F(2,28) = 3.71, P = .037; tempo 180 bpm, F(2,28) = 4.60, P = .019). Although faster tempos were harder for all groups, the performance differences between groups were preserved at these faster tempos. Surprisingly, no significant differences between NH and CI subjects were noted when early and late stimuli were separated according to degree of deviation, suggesting that CI subjects performed as well as NH subjects even for the most difficult stimuli. MUS significantly outperformed NH subjects for the ±1/8 and ±3/16 final-beat deviations, suggesting that musical experience and training may be related to improved performance.
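As an illustration of the per-tempo group comparison, the sketch below runs a one-way ANOVA across the three listener groups at a single tempo using fabricated scores with the study's group sizes. It is a simplification of the factorial analysis reported above and does not use the study data.

```python
import numpy as np
from scipy import stats

# Hedged sketch of a per-tempo comparison across NH, CI, and MUS listeners
# using a one-way ANOVA. The scores are fabricated placeholders with group
# sizes matching the study (12, 12, 7); they are NOT the study data.

rng = np.random.default_rng(1)
scores_at_tempo = {
    "NH":  rng.normal(51.5, 13.8, size=12),
    "CI":  rng.normal(56.4, 13.9, size=12),
    "MUS": rng.normal(70.9, 6.0,  size=7),
}

f_stat, p_val = stats.f_oneway(*scores_at_tempo.values())
df_within = sum(len(v) for v in scores_at_tempo.values()) - 3
print(f"F(2, {df_within}) = {f_stat:.2f}, p = {p_val:.3f}")
```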


It is important to note that the task used here differed significantly from the identification of beat patterns, a task that can be performed successfully even with impaired rhythmic clocking because basic rhythmic patterns are robust to minor temporal irregularities on the scale of milliseconds. Because of the small degree of temporal deviation used in these stimuli, and because responses were based on only 3 preceding isochronous beats, it was thought that this test would be sensitive enough to identify any true performance differences between the CI and NH groups. Interestingly, the authors found that CI subjects performed as well as NH controls during this task, with no difference in performance along any parameter. These results lend strong support to the growing body of literature suggesting that rhythm perception is largely intact in CI users. It should be emphasized here that although the present study used a more difficult rhythmic task than other studies (NH subjects averaged only 51.49% correct), it is still possible that an even more complex task, such as polyrhythm detection, could reveal subtle differences in rhythmic performance between CI users and NH groups.
