3

Audiologic, Vestibular, and Radiologic Procedures


INTRODUCTION


The informal assessment of audiologic difficulties can be traced back to a date as early as 377 BC when Hippocrates first reported clinical observations of hearing loss (Vogel, McCarthy, Bratt, & Brewer, 2007). The actual examination of the type of hearing impairment can be dated to the late 1700s when test measures such as tuning fork tests were first employed. More precise methods of audiologic assessment were introduced in the 1920s with the advent of the Western Electric 1-A audiometer, which permitted better definition of the type of hearing loss and also provided a means to evaluate the extent and configuration of the hearing loss. Technology has advanced dramatically over the years; however, many of the fundamental principles of audiologic assessment remain the same. Therefore, for readers who are not familiar with traditional audiologic assessment measures, a review of the fundamentals of peripheral assessment certainly deserves attention. Advances in our understanding of the role that the central auditory nervous system (CANS) plays in hearing, coupled with the development of specialized electrophysiologic and electroacoustic equipment that permits the assessment of CANS integrity and processes, emerged in the 1970s. These advances have opened up new assessment opportunities for the audiologist and provided better diagnostic information for the patient. Therefore, a review of central auditory assessment procedures is also essential. It is beyond the scope of this chapter to provide all of the details necessary for a thorough understanding of audiologic test procedures; instead, this chapter provides a general overview of relevant and more commonly used evaluation measures.
In addition to the review of both peripheral and central auditory test procedures, a brief overview of various vestibular and radiologic assessment procedures is provided in this chapter as these are often used in conjunction with audiologic testing to determine the anatomic site of lesion and/or the presence of comorbid vestibular system abnormalities that are frequently noted in patients with hearing loss.


AUDIOLOGIC ASSESSMENT


Examination of Hearing Sensitivity and Speech Recognition Ability


The audiometer is the primary instrument used by audiologists in the assessment of peripheral hearing sensitivity. It allows the audiologist to evaluate a patient’s sensitivity for a variety of different sound stimuli such as pure tones and speech. When a hearing evaluation is performed, the results are displayed on a pure-tone audiogram (Figure 3–1). The audiogram is a graphic representation of hearing, which is plotted as thresholds (i.e., the lowest intensity level at which a stimulus is audible) in decibels (dB hearing level; dB HL) as a function of frequency, which is measured in hertz (Hz). The frequencies typically evaluated include the octave frequencies from 250 through 8000 Hz, because most of the sounds critical for the understanding of speech fall within this frequency range. In some cases, however, additional frequencies may be tested. Such cases would include the assessment of patients presenting with complaints of tinnitus, those individuals at risk for noise-induced hearing losses, and patients who are being monitored for potential ototoxic effects of medications. For those patients who report experiencing tinnitus, matching procedures may be completed in an effort to determine the frequency and level of the patient’s tinnitus, which may not occur at one of the octave frequencies routinely tested. Testing of the interoctave frequencies of 3000 and 6000 Hz is frequently completed when a patient presents with a history of excessive noise exposure, and in the case of the patient who is being monitored for potential damaging effects of ototoxic medication(s), the ultra-audiometric frequencies (i.e., frequencies above 8000 Hz) are often tested.


Several different classification systems have been recommended for quantifying the degree of hearing loss (see Clark, 1981; Goodman, 1965; Jerger & Jerger, 1980), and many clinicians use a combination of these classification systems to describe the degree of hearing impairment. An example of the classification system that we will be using in this text is provided in Figure 3–2. In addition to quantifying the degree or severity of hearing loss, the audiologist will determine the type of hearing loss. It is generally accepted that there are three types of hearing loss: (1) conductive hearing loss, (2) sensorineural hearing loss, and (3) mixed hearing loss. Conductive hearing losses are hearing losses that occur because of a loss of sound conduction from the outer ear to the inner ear, whereas hearing losses that are sensorineural in nature are the result of cochlear and/or retrocochlear involvement. Mixed hearing losses are exactly what the name would suggest, that is, a combination of a conductive and sensorineural hearing loss.


The audiologist’s role is to accurately measure a patient’s behavioral hearing sensitivity, and in doing so, to determine if a hearing loss is present. Once a hearing loss has been diagnosed and the type, degree, and configuration of the hearing loss is determined, the audiologist is able to implement appropriate referrals and (re)habilitation efforts. However, one must keep in mind that the pure-tone audiogram is only one method for the evaluation of hearing, which provides information primarily about peripheral auditory system involvement. Although the pure-tone audiogram is the cornerstone of audiologic assessment, its greatest limitation is that it provides limited information with respect to the processing and subsequent comprehension of complex auditory information (i.e., it is simply an auditory detection measure). Patients can have significant involvement of the central auditory nervous system (CANS) and still present with normal audiometric thresholds (Bocca, Calearo, Cassinari, & Migliavacca, 1955; Karlin, 1942; Musiek, Shinn, Chermak, & Bamiou, 2017).




Speech audiometry is another critical component of traditional audiologic assessment. It provides additional information with respect to the functional performance of the auditory system. Typically, two speech measures are made during this assessment: (1) speech recognition threshold (SRT) or alternatively a speech awareness threshold (SAT), and (2) word recognition performance at suprathreshold levels. The SRT is determined by presenting a closed set of two-syllable (spondee) words and determining the patient’s threshold for speech. In this test procedure, the patient either repeats the stimuli presented or points to pictorial representations of the presented stimuli. For young children and other difficult-to-test populations who are not able to complete the SRT test procedure, SATs are utilized. In this measure, the individual being tested does not have to recognize the stimulus being presented, but, rather, only has to detect that a signal is being presented (in this case, the signal is a speech signal as opposed to a pure-tone signal). The SRT should be in good agreement (±8 dB) with the pure-tone average (PTA) and can be compared either to a three-frequency PTA (i.e., the average of hearing thresholds obtained at 500, 1000, and 2000 Hz) for patients with relatively flat hearing losses, or to the better two-frequency average for patients who have steeply sloping hearing losses. However, if an SAT measure is derived rather than an SRT, the SAT often is obtained at a lower intensity level than the SRT would be for reasons explained previously. As a result, there is likely to be a greater difference between the SAT and the pure-tone average when this speech threshold measure is derived.
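The PTA computation and PTA/SRT cross-check described above can be illustrated with a brief computational sketch. The helper names are hypothetical; thresholds are in dB HL, and the ±8 dB agreement criterion is taken from the text:

```python
# Hypothetical sketch of pure-tone average (PTA) calculation and the
# SRT/PTA agreement check described in the text. All values in dB HL.

def three_freq_pta(thresholds):
    """Average of the thresholds at 500, 1000, and 2000 Hz."""
    return sum(thresholds[f] for f in (500, 1000, 2000)) / 3

def two_freq_pta(thresholds):
    """Average of the best (lowest) two of the 500/1000/2000 Hz thresholds,
    used for patients with steeply sloping losses."""
    vals = sorted(thresholds[f] for f in (500, 1000, 2000))
    return sum(vals[:2]) / 2

def srt_agrees_with_pta(srt, pta, tolerance=8):
    """SRT should be within +/-8 dB of the PTA (criterion from the text)."""
    return abs(srt - pta) <= tolerance

# Relatively flat loss: 30/35/40 dB HL -> three-frequency PTA of 35 dB HL
flat_loss = {500: 30, 1000: 35, 2000: 40}
```

For a steeply sloping loss such as 10/20/60 dB HL, the better two-frequency average (15 dB HL) is a fairer comparison to the SRT than the three-frequency PTA (30 dB HL).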


In addition to speech threshold measures, speech or word recognition testing is completed by evaluating the percentage of words that patients are able to identify correctly at suprathreshold levels. This suprathreshold level is typically between 30 and 50 dB above either the SRT/SAT or the PTA, but a lower presentation level may be required for patients who present with “sensory” hearing losses and significant amounts of recruitment (the abnormal growth in loudness perception as intensity increases). For patients with normal hearing or conductive hearing loss, speech recognition scores are typically excellent (90% or greater). For those with sensorineural hearing loss, these scores typically fall below 90% and often decrease as hearing loss increases. However, although there is a general trend for speech recognition scores to decrease as the degree of hearing loss increases, it is important to recognize that there is some variability in the performance of individuals with hearing loss on speech recognition tests. As a result, there is no way to predict an individual’s speech recognition scores based upon his or her hearing sensitivity, as one can observe a patient with a more severe sensorineural hearing loss having more favorable speech recognition scores than a patient with a much less severe hearing loss. Speech audiometry provides important information about the patient’s speech recognition abilities, which can play a significant role in helping to establish the best rehabilitative approach for a patient’s care. For example, adult cochlear implant candidacy is based not primarily on pure-tone thresholds, but rather on poor speech recognition abilities (Arnoldner & Lin, 2013).


Immittance Audiometry


Immittance audiometry is an important tool in diagnostic audiology and otology. It provides information that can aid in the detection of a variety of outer and middle ear auditory disorders as well as in the documentation of normal/abnormal cochlear and lower brainstem function. For the purposes of this book, we briefly orient the reader to the two primary measures of immittance audiometry: tympanometry and acoustic reflex thresholds.


Tympanometry


Tympanometry allows clinicians to measure the amount of compliance of the tympanic membrane and function of the middle ear system. Maximum compliance is achieved when the air pressure in the external ear is equal to the air pressure within the middle ear system. This measurement is accomplished by sealing the ear canal with a soft plastic ear tip attached to a probe assembly and varying the air pressure in the ear canal to measure the movement of the tympanic membrane. The results are displayed as a tracing called a tympanogram that plots compliance as a function of middle ear pressure. In addition, information about the peak pressure, maximum compliance, and equivalent ear canal volume (referred to simply as “volume” throughout this chapter and in subsequent chapters) are provided. A normal tympanogram will yield peak pressure, compliance, and volume measurements that fall within the normal ranges shown in Table 3–1.


There have been several classification systems with respect to tympanometry, but perhaps the most widely used is the Jerger classification system (Jerger, 1970) in which a number of tympanograms have been categorized based on their specific shapes and characteristics (Figure 3–3). Using this classical description scheme, clinicians have adopted the following universal tympanogram descriptors:




Type A: Type A curves demonstrate normal pressure, volume, and compliance values.


Type As: The “s” in As stands for “shallow.” These tracings typically demonstrate a normal middle ear pressure measurement and a compliance peak that is significantly reduced, but there continues to be some mobility or compliance in the system.


Type Ad: The “d” in Ad stands for “discontinuous” or “deep” and represents the exact opposite of the As tracing. The Ad tracing typically demonstrates normal pressure; however, there is a hypercompliant peak. This is typically associated with too much flaccidity of the tympanic membrane and can be secondary to disorders such as ossicular disarticulation.


Type B: Type B tympanograms can occur under several conditions. The tracings are sometimes referred to as “flat” because there is no observable compliance peak due to either a lack of movement of the tympanic membrane or a measurement error. An important measure to evaluate in these cases is the volume. If the tracing reflects a normal volume, there likely is middle ear involvement causing the lack of mobility. This is typically the result of effusion but may also be caused by other disorders such as cholesteatoma. However, if there is an abnormally small volume present, it may be that the probe tip has been occluded or blocked by cerumen, debris, or the ear canal wall. A tracing that reflects a large volume typically suggests a patent pressure equalization (PE) tube or a perforation of the tympanic membrane.


Type C: Type C tympanograms suggest some degree of Eustachian tube dysfunction. The tracing will demonstrate normal volume and compliance with peak compliance being noted at a negative middle ear pressure value.
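The decision logic behind the Jerger typing scheme above can be sketched computationally. The numeric cutoffs below are illustrative placeholders only (actual clinical norms come from Table 3–1 and vary with age and instrumentation), and the function name is hypothetical:

```python
# Illustrative (non-normative) sketch of Jerger tympanogram typing.
# Cutoff values are placeholders, NOT clinical norms (see Table 3-1).

def classify_tympanogram(peak_pressure_dapa, compliance_ml, volume_ml,
                         pressure_norm=(-100, 50),
                         compliance_norm=(0.3, 1.5),
                         volume_norm=(0.6, 2.0)):
    # An essentially flat tracing (no compliance peak) -> Type B;
    # the volume measurement then guides interpretation.
    if compliance_ml < 0.1:
        if volume_ml > volume_norm[1]:
            return "B (large volume: perforation or patent PE tube)"
        if volume_ml < volume_norm[0]:
            return "B (small volume: occluded probe tip?)"
        return "B (normal volume: probable middle ear effusion)"
    # Peak present but at abnormally negative pressure -> Type C.
    if peak_pressure_dapa < pressure_norm[0]:
        return "C (negative pressure: Eustachian tube dysfunction)"
    # Peak present at normal pressure: classify by compliance.
    if compliance_ml < compliance_norm[0]:
        return "As (shallow peak)"
    if compliance_ml > compliance_norm[1]:
        return "Ad (hypercompliant peak)"
    return "A (normal)"
```
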


In addition to peak pressure, compliance, and volume measurements, many clinicians also assess the tympanogram width or gradient as this can be another indicator of middle ear pathology. The tympanometric width is derived by measuring the width of the tympanogram (in daPa units) at a point that is half the peak admittance on the positive side of the tracing. Normative values for children range between 80 and 159 daPa (Margolis, Hunter, & Giebink, 1994) and for adults between 51 and 114 daPa (Margolis & Heller, 1987). Abnormally large tympanometric widths are indicative of middle ear dysfunction and can be observed in middle ear conditions such as otitis media and cholesteatoma, whereas abnormally narrow tympanometric widths have been noted in cases of ossicular discontinuity and in some cases of ossicular fixation (Hunter & Shahnaz, 2014). Other methods have been introduced for measuring the tympanometric gradient that express the gradient as a ratio measure (see Nozza, 2003); however, the tympanometric width measurement described previously is more often employed as it is the tympanogram measure provided on many commercial immittance audiometers.
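The tympanometric width interpretation described above, using the normative ranges cited in the text, might be sketched as follows (the function name is hypothetical):

```python
# Sketch of tympanometric width interpretation using the normative
# ranges cited in the text.

CHILD_NORM_DAPA = (80, 159)   # Margolis, Hunter, & Giebink (1994)
ADULT_NORM_DAPA = (51, 114)   # Margolis & Heller (1987)

def interpret_tym_width(width_dapa, is_child):
    low, high = CHILD_NORM_DAPA if is_child else ADULT_NORM_DAPA
    if width_dapa > high:
        return "abnormally wide (e.g., otitis media, cholesteatoma)"
    if width_dapa < low:
        return "abnormally narrow (e.g., ossicular discontinuity or fixation)"
    return "within normal limits"
```
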


Acoustic Reflexes


Acoustic reflex testing is a direct measure of the acoustic stapedial reflex. The acoustic reflex threshold is defined as the lowest intensity level at which a reflex can be measured. Because the stapedial muscle contracts in response to loud sounds, a direct measure of the integrity of the stapedial reflex can be accomplished through this technique. As there are both ipsilateral and contralateral inputs within the brainstem, the stapedial reflex can be recorded with either ipsilateral or contralateral stimulation. Depending on the pattern of responses observed, varying inferences regarding the site of lesion can be made. In most normal hearing individuals, this reflex can be observed around 70 to 90 dB HL. Reflexes are elevated or absent when insufficient intensity is delivered to the cochlea secondary to compromise of the outer and/or middle ears. In individuals with cochlear hearing losses, the reflexes may be present at normal or elevated threshold levels, but they typically are reduced in terms of their sensation levels (SL). For patients with profound sensory (cochlear) hearing losses, the reflexes are likely to be absent.


Elevated or absent acoustic reflexes can be indicative of retrocochlear involvement (see Wilson & Margolis, 1999). The auditory nerve, the auditory nuclei in the low pons (the cochlear nucleus and the superior olivary complex), and the facial nerve (both its nucleus and the nerve itself) must all be intact to provide a normal acoustic reflex. In the ipsilateral reflex, the neural response to the stimulus courses from the auditory nerve to the cochlear nucleus and superior olivary complex and then to the facial nerve nuclei and back down the facial nerve to the stapedius muscle ipsilaterally. A lesion anywhere along this route can affect the ipsilateral reflex. The contralateral reflex pathway is similar to the ipsilateral, but its course crosses midline in the low pons, ultimately connecting to the contralateral facial nerve nuclei and then the contralateral stapedius muscle. Therefore, the contralateral reflex can be affected by midline and contralateral brainstem lesions as well as by dysfunction of the contralateral facial nerve (see Chapter 2, “Structure and Function of the Auditory and Vestibular Systems”).


Measurement of acoustic reflex decay can also be utilized clinically. This typically requires presentation of a 500-Hz and/or a 1000-Hz stimulus at 10 dB SL (i.e., 10 dB above the acoustic reflex threshold) over a 10-sec time period. If the amplitude of the response decreases by more than half of its maximum over the 10-sec period, it may indicate retrocochlear involvement affecting the eighth nerve or low brainstem (Wilson & Margolis, 1999). However, the sensitivity of the acoustic reflex decay test as an indicator of retrocochlear pathology is considered to be somewhat poorer than that of the auditory brainstem response (Feeney & Schairer, 2015). As a result, the latter procedure is more commonly used by audiologists to screen for retrocochlear lesions affecting the auditory nerve and/or the auditory nuclei located in the low brainstem.
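The reflex decay criterion above (a drop of more than half of the maximum amplitude over the 10-sec presentation) reduces to a simple check, sketched here with a hypothetical function name:

```python
# Sketch of the acoustic reflex decay criterion from the text: the test
# is positive (suggesting possible retrocochlear involvement) if the
# reflex amplitude falls below half of its maximum during the 10-s tone.

def reflex_decay_positive(amplitudes):
    """amplitudes: reflex magnitude samples across the 10-s presentation,
    in order of occurrence (arbitrary units)."""
    peak = max(amplitudes)
    return amplitudes[-1] < 0.5 * peak

# e.g., amplitude falling from 1.0 to 0.4 of peak -> positive decay
```
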


Electroacoustic Measures


Electroacoustic procedures are tests that evaluate the acoustical responses of the auditory system. Otoacoustic emissions (OAEs) are a relatively recent discovery. They were initially described by David Kemp in the 1970s (Kemp, 1978), but did not become clinically integrated until the mid- to late 1990s. Traditionally, hearing has been evaluated through our oldest measure, behavioral pure-tone audiometry. However, OAE assessments play a critical role with respect to differential diagnosis, specifically in regard to the subspecialty of neuroaudiology. Both the electroacoustic and the electrophysiologic auditory assessments that are discussed later in the chapter can provide estimates related to hearing sensitivity. In addition, OAEs have become a tool often used in the screening of newborns for hearing loss.


Electroacoustic evaluation as it relates to OAEs has significantly changed the face of diagnostic audiology. Otoacoustic emissions provide clinicians with objective information specifically regarding the integrity of the outer hair cells of the cochlea. They are unique in their ability to provide information related to the active biological process at this level of the auditory system, which is not readily available by any other means of assessment. This active process produces an acoustic “echo” in response to stimulation of the hair cells. That is to say, the recorded response is shaped by, and similar to, the eliciting acoustic signal. In order for this process to occur and emissions to be present, the outer and middle ear systems must be functioning normally for the stimulating signal to travel through to the cochlea and the emission to travel back through the middle ear to be recorded by a probe microphone placed in the ear canal. Although OAEs do not assess central auditory function, they can play an important role in differentiating between a cochlear and an eighth nerve or a central site of lesion. A sensorineural hearing loss of cochlear origin will typically result in abnormal OAEs, whereas OAEs will be normal if the eighth nerve and/or central auditory system is involved without any comorbid conductive and/or cochlear involvement.


In general, OAEs are categorized as either spontaneous or evoked. For the purposes of this book, we focus on the evoked otoacoustic emissions, which include both distortion product otoacoustic emissions (DPOAEs) and transient evoked otoacoustic emissions (TEOAEs) as they are the most widely utilized in clinical assessment. Both DPOAEs and TEOAEs are elicited by placing a probe in the ear of the patient. The probe tip is placed securely within the external auditory meatus and the stimulus is delivered either as a pair of tones (DPOAEs) or as a click stimulus (TEOAEs).


Distortion product otoacoustic emissions are elicited by a nonlinear process within the cochlea. This process occurs when two tones that are close in frequency are presented to the cochlea, and the mechanics of the inner ear create a distortion of the signals. In a healthy ear with normal hearing or near-normal hearing sensitivity, this response (i.e., the distortion product) is measured. The tones used to elicit this distortion are labeled f1 and f2 (with f2 the higher of the two frequencies). The most robust response (the distortion product) is observed at or around a particular frequency that can be determined by applying the formula fdp = 2f1 − f2. For example, if f1 was presented at 1000 Hz and f2 was presented at 1200 Hz, the distortion product would occur at 800 Hz. Although the most robust response occurs at the frequency that results from the application of this formula, other less robust (i.e., less intense) responses are also generated.
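The cubic distortion product calculation is easily verified; the example values below are those given in the text:

```python
# The distortion product frequency formula from the text: fdp = 2*f1 - f2,
# where f1 and f2 (f2 > f1) are the two eliciting tones in Hz.

def dp_frequency(f1, f2):
    return 2 * f1 - f2

# Example from the text: f1 = 1000 Hz, f2 = 1200 Hz -> 800 Hz
```
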


An important concept to understand is that measurement of OAEs requires forward and backward sound transmission. That is to say, a signal (the eliciting stimulus) must reach the cochlea, and the response, if generated by the cochlea, must travel back to the outer ear. In order for responses to be successfully measured, the outer and middle ear must be free of debris and/or pathology that could affect this signal transmission. For example, if middle ear effusion is present, the forward transmitting signal will be greatly reduced because the fluid impedes the transmission of the eliciting stimulus. Even in cases of a healthy inner ear, this will likely result in a reduced or absent response at the level of the cochlea due to the reduction in the stimulus intensity before it even reaches the outer hair cells. If there is involvement or pathology of the outer or middle ear systems, often no valid interpretation can be made regarding cochlear integrity as the absence or low amplitude of the response can be due to the presence of the outer or middle ear pathology rather than true cochlear involvement.


Interpretation of DPOAEs is performed by measuring the level of the response (the emission) in reference to the noise floor or as an absolute value (i.e., the sound pressure level of the DPOAE). The noise floor is the measurable noise recorded in the ear canal. The DPOAE response is recorded as a function of its amplitude above the noise floor for the particular frequency(ies) of interest. If a response meets the clinical criterion for level above the noise floor or for absolute response level (dB SPL), the response is considered to be present (Figure 3–4).
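The presence decision might be sketched as follows. The 6-dB signal-to-noise criterion used here is an assumed placeholder, as clinics and manufacturers set their own criteria:

```python
# Sketch of the DPOAE presence decision described in the text. The 6 dB
# signal-to-noise criterion is an ASSUMED placeholder, not a universal
# clinical standard.

def dpoae_present(emission_db_spl, noise_floor_db_spl, snr_criterion_db=6):
    """Present if the emission exceeds the noise floor by the criterion."""
    return (emission_db_spl - noise_floor_db_spl) >= snr_criterion_db
```
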


Transient otoacoustic emissions are another means by which cochlear function can be measured. TEOAEs are elicited through the use of a transient signal such as a click stimulus. The emission onset occurs approximately 4 msec following stimulation. The TEOAE is a broad-spectrum response; however, it does provide frequency information as the response follows the tonotopic arrangement of the basilar membrane. Similar to DPOAEs, the TEOAE is determined to be present or absent based on the relationship between the response and the noise floor. If the response is repeatable and exceeds the noise floor, then it is considered to be present. Like DPOAEs, TEOAEs are indicative of cochlear outer hair cell function; however, as was the case for DPOAEs, it may be impossible to document normal cochlear function if abnormal outer and/or middle ear pathology is present. Figure 3–5 demonstrates an example of a present TEOAE response.



Although not a direct measure of hearing sensitivity, OAEs provide general information regarding outer hair cell function within the cochlea. There are many applications for OAEs. These include, but are not limited to, infant screening, pediatric assessment, ototoxicity monitoring, and differential diagnosis of cochlear versus retrocochlear involvement. Patients who present with normal hearing sensitivity typically will demonstrate normal OAEs. The general clinical observation with respect to OAE interpretation is that in individuals with present OAEs, hearing is likely to be better than 30 dB HL (specifically for TEOAEs; DPOAEs may be present in some cases with slightly more severe hearing loss). Therefore, those with absent OAEs (and normal outer and middle ear function) will likely demonstrate a sensorineural hearing loss with hearing thresholds greater than 30 to 40 dB HL. Finally, it is also interesting to note that in some cases, individuals who have a history of noise exposure may demonstrate abnormalities on OAE tests that are not evident on the audiogram (Attias, Horovitz, El-Hatib, & Nageris, 2001). In such cases, the early identification of noise damage to the auditory system may facilitate early intervention measures to prevent more significant hearing loss effects.



Otoacoustic emissions are invaluable with respect to their role in the differential diagnosis of cochlear versus retrocochlear involvement. Individuals who have either eighth nerve or central auditory involvement (in the absence of any comorbid cochlear or outer/middle ear conditions) will present with normal OAEs but will likely demonstrate abnormalities on pure-tone audiometry and ABR testing (in cases with eighth nerve involvement) or on behavioral and electrophysiologic measures (in patients with CANS compromise) (see Robinette & Glattke, 2002).


Because the degree of hearing sensitivity cannot be directly determined from them, otoacoustic emissions are certainly limited in their diagnostic utility. That is to say, in individuals who present with absent OAEs, all that can be concluded is that they have at least a mild to moderate degree of hearing loss. Therefore, OAEs are a useful screening measure, but lack strength with respect to threshold determination. This is why objective measures of hearing through electrophysiologic assessment can and should be a critical component of a diagnostic battery for many patients.


Auditory Processing Tests


Auditory processing evaluations seek to determine how efficiently and effectively patients are able to process complex auditory stimuli. The pure-tone audiogram is limited in that it only provides information about peripheral hearing sensitivity. However, many patients with significant lesions of the central auditory nervous system (CANS) demonstrate normal hearing sensitivity but are significantly impaired in the “processing” of auditory information. As the pure-tone audiogram usually is not helpful in documenting the auditory deficits experienced by these patients (see Musiek et al., 2017, for a review), additional behavioral and/or electrophysiologic testing will be needed to identify the patient’s auditory processing disorder. The tests described in the following sections can be administered to patients who report concerns regarding hearing (particularly in background noise) yet demonstrate normal peripheral hearing sensitivity in an effort to determine if the presence of a central auditory processing deficit is the basis for the patient’s auditory symptoms (Musiek & Chermak, 2014). In some patients who have mild to moderate hearing loss, central auditory testing can be completed, but the results need to be interpreted with extreme caution (see American Academy of Audiology, 2010, for discussion of the assessment of patients with peripheral hearing loss).


The assessment of central auditory processing generally takes on two forms: (1) behavioral assessment and (2) electrophysiologic assessment. The behavioral assessment seeks to provide information regarding a patient’s functional performance, whereas the electrophysiologic assessment provides information regarding the neural integrity of the CANS (see the following discussion).


Behavioral Tests


It is recommended that the evaluation of a patient’s central auditory processing disorder (CAPD) include a test battery consisting of tests of temporal processing, dichotic listening, monaural low redundancy speech perception, auditory discrimination tasks, and/or binaural interaction tests (American Academy of Audiology, 2010). Table 3–2 provides a listing of some of the clinically available tests within these areas of auditory processing, and Figure 3–6 provides an example of the form that will be used in this text to display central test results for adult patients.



Tests of temporal processing evaluate the ability of the auditory system to process small and rapid changes of sound over time (see Lister, Roberts, & Lister, 2011; Musiek et al., 2005). Although there are four subtypes of temporal processing (resolution, sequencing, integration, and masking), only the first two are commonly used in the assessment of patients being evaluated for central auditory processing deficits due to a lack of available clinical measures for the latter two areas. Pattern perception tests (frequency and duration) assess, among other things, temporal sequencing ability. They require the ability to properly identify the sequence of rapidly occurring tones that vary in either frequency or duration. The Gaps-in-Noise (GIN) and Random Gap Detection tests are clinical measures of temporal resolution. These entail detection of a short interval of silence embedded in an acoustic stimulus, usually either white noise or a tone. The types of auditory processing tests discussed previously have been found to be sensitive to lesions of the CANS (Filippini, Wong, Schochat, & Musiek, 2019; Musiek et al., 2005; Musiek & Pinheiro, 1987).



The dichotic listening tests evaluate the binaural integration and separation abilities of the CANS and are sensitive to cortical lesions, corpus callosum compromise, and—to a lesser extent—brainstem involvement (Baran & Musiek, 1999; Musiek & Pinheiro, 1985). Binaural integration is evaluated by presenting the patient with different stimuli to each ear simultaneously and having the individual process and repeat what has been heard. This differs from binaural separation where the individual is asked to process and repeat stimuli presented to one ear while ignoring the stimuli presented to the other ear. Dichotic listening tasks are particularly useful in identifying cases of deficient interhemispheric transfer where marked left ear deficits are the hallmark finding.


The monaural low redundancy tests are among the least sensitive of the central measures, but they provide ecological validity, which is beneficial to the test battery (Baran & Musiek, 1999). These tests are designed to degrade the auditory signal by filtering (filtered speech), increasing the rate (compressed speech), or placing the signal in competition (speech-in-noise or speech-in-speech competition).


Perhaps the most widely recognized binaural function test is the masking level difference (MLD) test. This measure, although not a direct assessment, provides insight into localization and lateralization abilities by creating a “release from masking” through changes in the phase relationships at the two ears. This procedure has been shown to be highly sensitive to brainstem involvement (Lynn, Gilroy, Taylor, & Leiser, 1981). MLDs work best for low-frequency stimuli, such as 500-Hz tones and spondee words. Also gaining in popularity is the LiSN (Listening in Spatialized Noise) test, which assesses spatial processing ability within a speech competition context. Target and competing speech signals are presented from locations that are either the same or spatially separated, and the listener’s performance is recorded for each of these configurations (see Cameron & Dillon, 2008). A more detailed discussion of the auditory processing tests discussed in this chapter can be found in Musiek and Chermak (2014).


Electrophysiologic Assessment


The use of electrophysiologic measures can be traced back to the early 1930s with routine clinical use beginning in the early 1980s (see Hall, 2007). Electrophysiologic assessment of the auditory system can be used for both the neurodiagnostic evaluation of the integrity of the auditory system as well as the evaluation of hearing sensitivity. This type of assessment is often employed to measure neurobioelectric activity arising from within the auditory nerve and/or the CANS. One of the electrophysiologic procedures discussed later in this chapter is limited in its assessment of these auditory structures as it primarily measures cochlear potentials (see the discussion regarding electrocochleography). Electrophysiology has a long-standing history in clinical audiology spanning more than four decades, and it continues to be an integral component of today’s diagnostic evaluation.


Electrophysiologic measures (also referred to as evoked potentials [EPs]), like pure-tone audiometry, can be used to assess hearing sensitivity. However, these procedures also provide a mechanism for the objective measurement of neural integrity within the auditory system. Therefore, evoked potentials can be useful tools in the differential diagnosis of a variety of auditory disorders as they allow for measurement of not only the auditory nerve, but the entire CANS through the level of the cortex (i.e., if a combination of EPs is used). The following discussion will provide an overview of the early, middle, and late evoked potentials that are used in audiologic assessment.


Electrocochleography (ECochG) was the first of the auditory evoked potentials to be discovered in the 1930s (see Hall, 2007). Today, it has relatively widespread clinical use with respect to the evaluation of Ménière’s disease, as well as intraoperative monitoring. This potential is typically obtained by placing one of three types of electrodes (a canal electrode placed in the external auditory meatus, a tympanic membrane electrode placed on the eardrum, or a transtympanic electrode—a needle electrode placed on the promontory) in both the involved and uninvolved ear with a disk electrode placed at Fpz (ground). Using an alternating click stimulus, both a summating potential (SP; a direct current receptor potential reflecting cochlear electrical activity in response to acoustic stimulation) and an action potential (AP; a postsynaptic potential generated by the auditory nerve) are extracted (Figure 3–7). The ratio between these two potentials is calculated in order to determine if abnormalities are present. Abnormal results will vary depending upon the type of electrode utilized. Transtympanic electrodes yield less variance, with an SP/AP ratio of greater than 30% falling outside the norm, whereas ear canal electrodes require an SP/AP ratio of 50% or greater to be considered abnormal by many investigators (see Ferraro, 2000, and Hall, 2007, for reviews). This test has proven to be useful in supporting the diagnosis of Ménière’s disease as patients with this disease typically present with abnormally large SP/AP ratios (Ferraro, 2000). It also can be used to assist in the identification of wave I of the auditory brainstem response (ABR) if this wave is not readily identifiable in the ABR waveform.
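The SP/AP ratio interpretation, with the electrode-dependent cutoffs cited in the text, can be sketched as follows (the function name is hypothetical, and amplitudes are in arbitrary but consistent units):

```python
# Sketch of SP/AP ratio interpretation for ECochG, using the
# electrode-dependent cutoffs cited in the text: a ratio > 30% is
# abnormal for transtympanic electrodes, while >= 50% is required
# for ear canal electrodes.

def sp_ap_abnormal(sp_amplitude, ap_amplitude, electrode="canal"):
    """Flag an enlarged SP/AP ratio (consistent with Meniere's disease)."""
    ratio = sp_amplitude / ap_amplitude
    if electrode == "transtympanic":
        return ratio > 0.30
    return ratio >= 0.50
```
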

