80 Pure-Tone Audiometry
Pure-tone audiometry is the cornerstone of clinical auditory assessment. It is a psychoacoustical test which aims to establish the subject’s pure-tone hearing threshold levels at specific frequencies, that is, the minimum sound level at which a specific response can be obtained. It is used for the diagnosis, remedial action and rehabilitation of hearing loss.
The ear responds equally, not to equal increments, but to equal multiples of sound intensity. In other words, intensity is exponentially related to loudness perception. Therefore, a logarithmic scale to measure loudness is necessary. The bel is the log to the base 10 of the ratio of the sound intensity being measured to a constant reference intensity, both measured in W/m². The decibel is one-tenth of a bel, so a level in decibels is 10 times the logarithm of this ratio. Therefore,
Sound intensity level of Iₓ in dB = 10 log₁₀(Iₓ/I₀)
where I₀ = 10⁻¹² W/m²
It is more usual to measure sound pressure rather than intensity, and since sound intensity is proportional to the square of sound pressure:
Sound intensity level (dB) = 10 log₁₀(Pₓ²/P₀²)
which can be converted to:
20 log₁₀(Pₓ/P₀)
Log₁₀ 2 is about 0.3, so doubling sound intensity corresponds to a 3-dB increase. Each 10-dB increase represents a 10-fold increase in the intensity of the sound (log₁₀ 10 = 1) and a √10 ≈ 3.16-fold increase in sound pressure, but to the ear it is perceived as roughly a doubling of loudness.
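These relationships are easy to verify numerically. The sketch below (plain Python; the function names are my own) computes intensity and pressure levels against the standard references and confirms the 3-dB and 10-dB rules:

```python
import math

def intensity_level_db(intensity, ref=1e-12):
    """Sound intensity level in dB re the reference intensity (10^-12 W/m^2)."""
    return 10 * math.log10(intensity / ref)

def pressure_level_db(pressure, ref=20e-6):
    """Sound pressure level in dB re the reference pressure (20 micropascals)."""
    return 20 * math.log10(pressure / ref)

# Doubling intensity adds ~3 dB (10 * log10(2) = 3.01...):
print(intensity_level_db(2e-12) - intensity_level_db(1e-12))   # ~3.01

# A 10-fold intensity increase adds exactly 10 dB...
print(intensity_level_db(1e-11))                               # 10.0

# ...which corresponds to a sqrt(10) = 3.16-fold pressure increase:
print(pressure_level_db(20e-6 * math.sqrt(10)))                # ~10.0
```

The same functions reproduce the landmarks quoted below: a pressure of 20 × 10⁻⁶ Pa gives 0 dB SPL, and the 200 Pa threshold of pain gives 140 dB SPL.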
80.2 Decibel Scales
1. Sound pressure level (dB SPL) In terms of sound pressure, the threshold of hearing corresponds (approximately) to a sound pressure of 20 × 10⁻⁶ pascals (0 dB SPL) and the threshold of pain to a level of 200 pascals (approximately 140 dB SPL). The auditory system is less efficient at detecting sounds at the upper and lower ends of the frequency spectrum than in the middle regions, so recording thresholds in dB SPL produces a pure-tone audiogram which in normal circumstances would not be flat, but dome shaped. It was considered that the use of a dB SPL scale in pure-tone audiometry would make abnormalities difficult to identify.
2. Hearing level (dB HL or dB ISO) A decibel scale of human hearing was designed so that 0-dB hearing level (HL) would be the expected threshold of detection of a pure tone irrespective of its frequency; the amount of energy at 0 dB HL therefore differs from frequency to frequency. It is measured in relative terms (dB ISO), where the reference zero is an internationally agreed standard. This standard represents the thresholds at each test frequency for a group of presumed otologically normal young adults. On the dB HL scale, normal-hearing individuals would be expected to have a flat audiogram, the mean level being 0 dB HL. The clinical audiogram therefore gives an estimate of the subject's hearing relative to normal.
3. The A-weighted scale (dB A) The ear is not equally sensitive to sounds of different frequencies. It is particularly sensitive to sounds in the ‘speech frequencies’ (500–4,000 Hz) and progressively less so to sounds of lower and higher frequencies. In addition, it appears that the ear is less easily damaged by the sound frequencies to which it is less sensitive. To take account of this an ‘A-weighting’ is used, which reduces the contribution of very low and very high frequencies to the overall noise level measurement. This dB A scale is used in industrial and other noise exposure settings.
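The relationship between the SPL and HL scales in item 2 is simply a frequency-dependent offset: 0 dB HL at each frequency corresponds to the standardized normal threshold expressed in dB SPL (the reference equivalent threshold sound pressure level). A minimal sketch, using illustrative offsets only loosely resembling published supra-aural earphone values; the real calibration figures depend on the transducer and come from ISO 389-1 / ANSI S3.6:

```python
# Illustrative RETSPL offsets (dB SPL corresponding to 0 dB HL).
# These are rough stand-ins for demonstration; real values depend on
# the earphone type and are specified in ISO 389-1 / ANSI S3.6.
RETSPL = {250: 25.5, 500: 11.5, 1000: 7.0, 2000: 9.0, 4000: 9.5, 8000: 13.0}

def spl_to_hl(threshold_spl_db, freq_hz):
    """Convert a threshold measured in dB SPL to dB HL at a test frequency."""
    return threshold_spl_db - RETSPL[freq_hz]

# A subject whose 1 kHz threshold equals the normal reference sits at 0 dB HL:
print(spl_to_hl(7.0, 1000))   # 0.0
```

This offset is why a threshold curve that is flat in dB SPL would not plot flat in dB HL, and vice versa.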
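The A-weighting mentioned in item 3 is defined analytically (in IEC 61672) as a frequency response normalized to 0 dB at 1 kHz. A transcription of that defining formula, as a sketch:

```python
import math

def a_weighting_db(f):
    """A-weighting relative response in dB at frequency f (Hz), following the
    analogue filter definition in IEC 61672 (normalized to 0 dB at 1 kHz)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000), 2))   # 0.0 at the 1 kHz reference
print(a_weighting_db(100))              # strongly negative: low tones discounted
```

The large negative values at low frequencies are exactly the "reduced contribution" described above; within the speech frequencies the weighting stays close to 0 dB.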
Pure tones at several different frequencies are tested, usually 250, 500, 1,000, 2,000, 4,000 and 8,000 Hz for air conduction (although 3,000 and 6,000 Hz will be required for noise-induced hearing loss claims and can offer useful diagnostic indicators in routine clinics) and 500, 1,000, 2,000 and 4,000 Hz for bone conduction. The results of bone conduction become less reliable at and above 4,000 Hz, and at 250 Hz are often not representative as they may be felt rather than heard. Bone conduction is taken to give an indication of cochlear function, but because a variety of routes for the transmission of sound to the cochlea exist for bone conduction, it is not an absolute representation of inner ear threshold. When the skull is set in vibration by a bone conduction vibrator, the sounds reach the inner ear by the direct osseous route or via transmission across the middle ear. This causes an artificial depression of the bone conduction thresholds whenever a conductive defect is present. If the middle ear defect is corrected, then the bone conduction thresholds will appear to improve because of the addition of the middle ear component. This has become known as the 'Carhart effect' after Raymond Carhart, who described it in patients who had undergone successful fenestration surgery for otosclerosis.
The subject is seated in a soundproof room and the procedure is carefully explained by the examiner. Earphones are used for air conduction (or, if the ear canals are prone to collapse, insert earphones), and the subject is asked to signal by pressing a small handheld button as soon as the tone is heard and to keep the button pressed for as long as the sound is heard (this enables the tester to check the validity of the responses). Pure tones are produced by a calibrated audiometer (daily subjective listening tests should be performed by the tester, and the interval between objective calibrations should not exceed 12 months) and are first presented to the subject's perceived better ear.
Thresholds are ascertained using a psychophysical method of limits and results are plotted on an audiogram form (Fig. 80.1). Tones are first presented at an intensity above the patient's estimated threshold. The intensity is reduced in 10-dB steps until no sound is heard. The signal is then increased in 5-dB steps until at least half of the tone presentations are consistently heard. This continues in the following order: 1,000, 2,000, 4,000, 8,000, 500 and 250 Hz. Finally, 1,000 Hz is tested again to check on subject accuracy and should be within 10 dB of the initial result; the best result is plotted on the audiogram. Any gross differences require retesting. The second ear is then tested in identical fashion except for the 1-kHz retest. The timing and duration of signal presentation and the gaps between the signals should be varied, from 1 to 3 seconds, to avoid the patient pre-empting the stimulus and giving false-positive responses. No visual clues should be offered.
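The down-10/up-5 bracketing described above can be sketched as a simple simulation. Here `responds` stands in for the subject's button press, and the stopping rule used (at least two responses, making up at least half of the ascending presentations at a level) is one common variant of the criterion; clinics follow the full BSA/ASHA recommended procedures rather than this sketch:

```python
from collections import defaultdict

def find_threshold(responds, start=40, floor=-10, ceiling=120):
    """Down-10/up-5 threshold search (a sketch, not the full clinical protocol).

    `responds(level)` returns True if the subject signals at `level` dB HL.
    """
    level = start
    # Initial descent: drop 10 dB after each heard tone until nothing is heard.
    while level > floor and responds(level):
        level -= 10
    ascents = defaultdict(int)    # ascending presentations per level
    heard_at = defaultdict(int)   # responses on those presentations
    for _ in range(50):           # safety cap on the number of ascents
        # One ascending series: up in 5-dB steps until the tone is heard.
        while level <= ceiling:
            ascents[level] += 1
            if responds(level):
                heard_at[level] += 1
                break
            level += 5
        # Threshold rule: >= 2 responses, on at least half the ascents here.
        if heard_at[level] >= 2 and 2 * heard_at[level] >= ascents[level]:
            return level
        level -= 10               # heard once: drop 10 dB and ascend again
    return None                   # no reliable threshold found (e.g. no responses)

# A cooperative subject with a true threshold of 25 dB HL:
print(find_threshold(lambda lvl: lvl >= 25))   # 25
```

The 5-dB ascending step is what limits the resolution of a clinical audiogram, and the repeated ascents are what make the result a consistent, checkable response rather than a single lucky button press.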