17 Effects of Noise and Hearing Conservation

This chapter is concerned with the branch of audiology that deals with the effects and ramifications of excessive sound exposure. Terms like noise exposure and sound exposure will be used interchangeably in this context because we are dealing with issues involving too much sound. The topics that will be addressed here include the effects of noise on hearing ability and speech communication, its non-auditory effects, occupational noise exposure and industrial hearing conservation, and the related issue of workers’ compensation for noise-induced hearing losses. We will begin by expanding on several concepts about how noises are described and measured, which is often part of audiological practice and is also prerequisite to understanding the effects of noise exposure.

We are already familiar with the concepts involved in describing the magnitude and spectral content of a sound in terms of its overall sound pressure level (SPL), octave-band levels, and weighted (A, B, C) sound levels. It is also important to characterize noise exposures in terms of their temporal characteristics (Hamernik & Hsueh 1991; Ward 1991). Noise exposures are considered continuous if they remain above the level of “effective quiet” (the level below which a noise is too soft to cause a threshold shift). Noises that are not continuous may be considered (1) interrupted if they are on for relatively long periods of time (hours) above the level of effective quiet; (2) intermittent if the noise exposure is split up by short breaks (e.g., a few seconds to an hour) of quiet or effective quiet; or (3) time-varying if the noise stays above effective quiet but its level varies over time. Transient noises produced by the sudden release of energy (e.g., gunshots) are called impulse noises, and often exceed 140 dB SPL. On the other hand, impact noises are transients caused by impacts between objects (e.g., hammering on metal) and are usually less than 140 dB SPL.
Special sound level meters with extremely fast response characteristics are needed to measure transient noises accurately. Most noise events last for some period of time, from a few seconds to many hours. Also, the levels of the noise will usually vary during this period, often considerably. It is desirable to integrate these noise levels into a single number that summarizes the overall level of the exposure “averaged” over its duration, called its equivalent level (Leq). However, it is important to remember that decibels are logarithmic values, so that combining a series of sound level measurements into Leq cannot be done by using their simple arithmetic average. Instead, Leq is obtained using a method analogous to the one described in Chapter 1 for combining octave-band levels into overall SPL or dBA. The curious student will find the appropriate formulas and descriptions of their use in Berger, Royster, Royster, Driscoll, and Layne (2000). In practice, Leq values are often obtained directly using an integrating sound level meter or a dosimeter (described below). Special types of Leq are used for compliance with occupational hearing conservation regulations (e.g., OSHA 1983). The equivalent level for a 24-hour period of exposure is often expressed as Leq(24). When annoyance is an issue, the day-night level (DNL or Ldn) is used to describe the equivalent noise level for a 24-hour period in a way that assigns a penalty to nighttime noises because these are considered more objectionable. This is done by adding 10 dB to all noise levels that occur between the hours of 10 pm and 7 am. Community noise equivalent level (CNEL) is a variation of the DNL concept that adds a 5 dB penalty for evening noises and a 10 dB penalty for nighttime noises. All-day equivalent levels do not adequately represent objectionable single noise events like aircraft fly-overs, so a supplemental measurement is often necessary to characterize them.
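Because decibels are logarithmic, the averaging behind Leq must be done on the underlying sound energies rather than on the dB values themselves. A minimal sketch of this energy averaging (the function name and the sample levels are illustrative, not a standardized formula):

```python
import math

def leq(levels_db, durations=None):
    """Energy-average a series of sound levels (in dB) into a single
    equivalent level (Leq). Durations weight each level (any common
    unit, e.g., hours); equal weighting is assumed if omitted."""
    if durations is None:
        durations = [1.0] * len(levels_db)
    total_time = sum(durations)
    # Convert each level back to relative intensity, time-weight,
    # average, and return to dB.
    energy = sum(t * 10 ** (L / 10) for L, t in zip(levels_db, durations))
    return 10 * math.log10(energy / total_time)

# Two hours at 95 dBA plus six hours at 85 dBA:
print(round(leq([95, 85], [2, 6]), 1))
# prints 90.1; the time-weighted arithmetic mean of the dB values
# would be only 87.5
```

Notice that the louder-but-shorter segment dominates the result, which is exactly why arithmetic averaging of decibel values understates an exposure.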
These single incidents can be expressed in terms of an equivalent 1-second exposure called sound exposure level (SEL). There are also other approaches to identify the interference or annoyance capability of noises. For example, the spectrum of a noise can be compared with noise criterion (NC) curves, including balanced noise criterion (NCB) curves, which also consider the effects of vibration and audible rattles (Beranek 1957, 1989).

Noise surveys (or sound surveys) are systematic measurements of the sound levels in an industrial setting, or in other environments like the neighborhoods in the vicinity of an airport. Noise surveys are done for several reasons:

1. To identify locations where individuals (typically employees) are exposed to noise levels that exceed a certain criterion. This criterion is usually a time-weighted average (TWA) of ≥ 85 dBA because this value is the level specified by federal occupational hearing conservation regulations (OSHA 1983). The time-weighted average is a special kind of equivalent level normalized to an 8-hour workday.

2. To identify which employees are being exposed to noise levels of ≥ 85 dBA TWA or some other level of interest.

3. To ascertain which noise sources may require engineering controls, and to provide engineers with the information they need to determine what kinds of noise control methods are best suited to the problem at hand.

4. To determine how much attenuation must be provided by the hearing protection devices (e.g., earplugs and earmuffs) used by noise-exposed employees.

5. To identify and help alleviate problems pertaining to the audibility of emergency warning signals and the effectiveness of speech communication in the workplace.

Levels in dBA are usually needed for determining employee noise exposures.
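The time-weighted average mentioned above can be sketched numerically. OSHA (1983) specifies a 90-dBA criterion level, an 8-hour criterion duration, and a 5-dB exchange rate (each 5-dB increase halves the permissible time). The function names and sample exposures below are only illustrative:

```python
import math

CRITERION_DB = 90.0   # OSHA criterion level (dBA)
CRITERION_HRS = 8.0   # criterion duration (hours)
EXCHANGE_DB = 5.0     # exchange rate: +5 dB halves the allowed time

def allowed_hours(level_dba):
    """Permissible duration at a given level under OSHA (1983):
    8 hours at 90 dBA, halved for every 5-dB increase."""
    return CRITERION_HRS / 2 ** ((level_dba - CRITERION_DB) / EXCHANGE_DB)

def noise_dose(exposures):
    """Percent noise dose from (level_dBA, hours) pairs;
    100% corresponds to the permissible exposure limit."""
    return 100.0 * sum(hrs / allowed_hours(lvl) for lvl, hrs in exposures)

def twa(dose_percent):
    """8-hour time-weighted average (dBA) equivalent to a dose."""
    return 16.61 * math.log10(dose_percent / 100.0) + CRITERION_DB

# Four hours at 95 dBA plus four hours at 85 dBA:
dose = noise_dose([(95, 4), (85, 4)])
print(round(dose), round(twa(dose), 1))  # 125 (percent), 91.6 dBA
```

A full 8-hour day at exactly 85 dBA works out to a 50% dose, which is why 85 dBA TWA serves as the action level. In compliance measurements, levels below the 80-dBA floor would simply be excluded from the pairs.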
Octave-band levels are needed to evaluate the nature of the noise in detail, for engineering purposes such as determining appropriate noise control techniques, and for assessing hearing protectors. Noise levels in dBC are also needed for assessing hearing protectors, especially when octave-band levels are not available. Environmental noise surveys often use other measurements to concentrate on annoyance factors around airports, major highways, and other non-occupational noise sources. The noise level measurements obtained in various locations can be used to develop noise level maps. Noise maps are diagrams that depict noise levels as contour lines superimposed on the floor plan of the area that was surveyed. The contours on a noise map show which areas have similar noise levels, just as contours on a geographical map show regions with similar elevations above sea level. An example of a noise level contour map is shown in Fig. 17.1. Maps of this type make it easier to identify noise sources and employees who are being exposed to unacceptable noise levels, and to communicate this information to others. In addition to area measurements, it is usually necessary to measure representative personal exposures (1) because noise levels often vary with location; (2) because many employees move around among work locations that have different noise levels, and/or in and out of noise exposure areas; and (3) to account for times when employees may be away from high-noise locations, such as lunch and rest breaks. Similar noise contour maps are made around airports and other environmental noise sources. The principal noise measurement tools are sound level meters and noise dosimeters. Recall from Chapter 1 that the sound level meter (SLM) is the basic instrument for measuring the magnitude of sound. 
The SLM can be used to measure overall sound pressure level at its linear setting; weighted sound levels in dBA, dBB, and dBC; as well as octave- and third-octave-band levels with the appropriate filters. The characteristics of the A-, B-, and C-weightings and the octave bands were summarized in Fig. 1.23 and Table 1.7 of Chapter 1. In this section we will expand on some of the fundamental aspects of sound measurement, and apply this information to noise measurement. Sound level meters are classified as types 0, 1, 2, and S (ANSI-S1.4 2006a). The most precise sound level meters are type 0 or laboratory standard models, whose exacting tolerances require them to be correct within ± 0.7 dB from 100 to 4000 Hz. Type 0 SLMs are used for exacting calibration and reference purposes under laboratory conditions, but are not required for noise level analyses. Type 1 or precision SLMs are intended for precise sound level measurements in the laboratory and in the field, and have tolerances of ± 1 dB from 50 to 4000 Hz. General-purpose or type 2 SLMs are field measurement instruments with tolerances of ± 1.5 dB from 100 to 1250 Hz, and are the ones required to meet the noise level compliance standards of the Occupational Safety and Health Administration (OSHA) and other regulatory agencies. The student can get an appreciation for the tolerance differences among the three classes of SLMs from the samples listed in Table 17.1. One can also find “survey” (formerly type 3) SLMs, but they are not appropriate for our use. Special-purpose (type S) SLMs have a limited number of selected functions intended for a particular application rather than all of the features of the type 0, 1, and 2 meters. It takes a certain amount of time for a meter to respond to an abrupt increase in sound level.
Response time is expressed as a time constant, which is how long it takes the meter to reach 63% of its eventual maximum reading. Sound level meters have “slow” and “fast” response speeds that date back to the early days of their development, and are specified for use in many noise measurements by various federal and state laws and regulations. In particular, OSHA requires most noise measurements to be made at the slow speed. The slow speed has a time constant of 1 second. This sluggish response has the effect of averaging out sound level fluctuations, making the meter easier to read; but it also makes the slow speed inappropriate for estimating sound level variability over time. With a time constant of 0.125 second, the fast speed is better suited for assessing the variability of a sound, but the fluctuating meter readings make it more difficult to determine an overall level. However, even the fast response is far too slow for measuring transient noises like impulses and impacts. A true peak or instantaneous response setting is needed to measure these kinds of sounds. At their true peak settings, type 0 SLMs must be able to respond to pulses as short as 50 microseconds, and type 1 and 2 meters must be able to respond to 100-microsecond pulses. It is important to be aware that the impulse setting on SLMs, which has a 35-millisecond rise time constant and a 1.5-second decay time, is not appropriate for measuring impulse sounds, in spite of its name.
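The 63% figure follows from modeling the detector as a first-order (exponential) averager: after an abrupt step, its reading approaches the final value as 1 − e^(−t/τ). A small illustration of this simplified model (real meters apply such averaging to the squared sound pressure):

```python
import math

def step_response(t_seconds, tau_seconds):
    """Fraction of the final steady reading that a first-order
    (exponentially averaging) detector shows t seconds after an
    abrupt step in level. Simplified model for illustration only."""
    return 1 - math.exp(-t_seconds / tau_seconds)

SLOW, FAST = 1.0, 0.125  # standard slow/fast time constants (s)

# After exactly one time constant, either detector reads ~63%:
print(round(step_response(SLOW, SLOW), 3))    # 0.632
print(round(step_response(FAST, FAST), 3))    # 0.632

# A 50-millisecond transient barely registers on the slow setting,
# which is why true-peak circuitry is needed for impulses/impacts:
print(round(step_response(0.050, SLOW), 3))   # 0.049
```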
Personal noise exposure measurements are obtained by having an employee wear the dosimeter over the course of the workday. In this case, the dosimeter is typically carried on the employee’s belt with its microphone located on top of his shoulder, or perhaps elsewhere on the body. The effect of where the microphone is worn depends on the nature of the acoustical situation, with small measurement differences in reverberant environments and large differences when noises are directional (Byrne & Reeves 2008). Area sound exposure measurements may be obtained by placing the dosimeter at an appropriate location in the work environment. Most dosimeters have a very large dynamic range, which means they are able to integrate a wide range of intensity levels into the exposure measurements. To comply with federal regulations (OSHA 1983), dosimeters usually include all sound levels ≥ 80 dB in the noise measurements, whereas those below this floor or lower threshold are omitted. (Other floors are also available but are not appropriate for compliance with noise exposure regulations.) The overall, accumulated noise exposure for the workday is generally given in terms of a noise dose (analogous to a radiation dose) and/or the equivalent time-weighted average in dBA for an 8-hour exposure. In addition, modern dosimeters can make measurements in both dBA and dBC using all kinds of meter responses (slow, fast, instantaneous, etc.), provide various kinds of accumulated exposure levels, and provide a profile showing a detailed noise exposure history for the period that was monitored.

Fig. 17.2 An example of a noise dosimeter commonly used for industrial noise exposure measurements. (Courtesy of Quest Technologies, Inc.)

Spectral information is not provided by the overall noise level measurements obtained with the linear setting of an SLM, or with its A, B, and C weighting networks.
However, a general idea about the spectral content of a noise can sometimes be obtained by comparing the sound levels measured with the linear and A-weighted settings of the sound level meter. This is possible because the overall SPL provided by the linear setting treats all frequencies equally, whereas the low frequencies are de-emphasized by the A-weighting network. In effect, a noise level reading in dB SPL includes the lows, whereas dBA “ignores” the lows. For this reason, a difference between the two levels implies that the noise contains low-frequency energy “seen” by the dB SPL reading but “ignored” by the dBA reading. On the other hand, similar levels in dB SPL and dBA imply that the noise does not have much energy in the low frequencies. This kind of comparison can also be made between noise levels in dBA and dBC because the C-weighting network largely approximates the linear response.

Fig. 17.3 Octave-band levels of the noise in a hypothetical industrial environment with an overall noise level corresponding to 91 dB SPL and 90 dBA. Notice the noise has a low-frequency emphasis as well as the dominant 89 dB peak in the 2000 Hz octave-band.

The spectrum of a noise can be determined by octave-band analysis, and an even finer picture of the spectrum can be obtained from a third-octave-band analysis. These measurements can be done with a sound level meter that has octave- or third-octave filters, or with a self-contained instrument often called an octave-band analyzer (or third-octave-band analyzer). Fig. 17.3 shows an example of an octave-band analysis using the noise present in a hypothetical industrial environment. Notice that the amplitude of this noise is generally greater in the lower frequencies except for a pronounced peak in the 2000 Hz octave-band. This peak happens to be associated with the operation of a particular machine in the plant.
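The SPL-versus-dBA comparison can be illustrated by combining octave-band levels on an energy basis, once without weighting and once with approximate A-weighting corrections. The band levels below are invented to echo the low-frequency emphasis and 89-dB 2000-Hz peak of Fig. 17.3 (they are not the figure's exact values):

```python
import math

# Approximate A-weighting corrections (dB) at octave-band center
# frequencies, and hypothetical octave-band levels (dB SPL).
A_WEIGHT = {63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
            1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}
BANDS = {63: 85, 125: 84, 250: 82, 500: 80,
         1000: 78, 2000: 89, 4000: 76, 8000: 72}

def combine(levels_db):
    """Combine band levels on an energy basis (as for overall SPL)."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db))

overall_spl = combine(BANDS.values())
overall_dba = combine(L + A_WEIGHT[f] for f, L in BANDS.items())
print(round(overall_spl, 1), round(overall_dba, 1))  # 92.4 91.0
# The two values stay close because the 2000-Hz peak dominates
# both sums, even though the lows are heavily discounted in dBA.
```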
These spectral characteristics cannot be ascertained from the overall noise level measurements, which were 90 dBA and 91 dB SPL. The dBA and dB SPL values are very close because of the dominance of the peak in the 2000 Hz octave-band. Another approach is to use a spectrum analyzer that determines the spectrum by performing a Fourier analysis on the sound electronically using a process called fast Fourier transformation (FFT). The effect of noise exposure on hearing sensitivity is expressed in terms of threshold shift. A temporary threshold shift (TTS) is a temporary change in hearing sensitivity that occurs following an exposure to high sound levels, and is a phenomenon that we have all experienced. The basic concept of a TTS is very simple. Suppose someone’s threshold is 5 dB HL, and she is exposed to a relatively intense noise for a period of time. Retesting her hearing after the noise stops reveals her threshold has changed (shifted) to 21 dB HL, and retesting a few hours later shows her threshold has eventually returned to 5 dB HL. Her threshold shift soon after the noise ended is 21 – 5 = 16 dB (the difference between the pre- and post-exposure thresholds), and it is a temporary threshold shift because her threshold eventually returned to its pre-exposure value. If her threshold never returned to its pre-exposure value, then it would have been a permanent threshold shift (PTS), also known as a noise-induced permanent threshold shift (NIPTS) or a noise-induced hearing loss (NIHL).1 Temporary and permanent threshold shifts have been studied in great detail and involve many factors (Miller 1974; Clark 1991a; Hamernik, Ahroon, & Hsueh 1991; Melnick 1991; Ward 1991; Kujawa & Liberman 2009). The nature of noise-induced damage to the auditory system was discussed in Chapter 6. In general, higher-frequency noise exposures cause larger threshold shifts than do equally intense lower-frequency exposures. 
In addition, the greatest amount of TTS tends to occur at a frequency that is about one half octave higher than the offending band of noise. However, not all sounds affect hearing sensitivity. The term equivalent quiet (or effective quiet) describes the intensity below which sound exposures do not cause a TTS. The weakest sound pressure levels that cause a TTS are ~ 75 to 80 dB for broadband noise, and decrease with frequency from 77 dB for the 250 Hz octave-band to 65 dB for the 4000 Hz octave-band. Thus, we are interested in the duration and intensity of exposures that are above equivalent quiet. In any case, it must be stressed that noise-induced threshold shifts vary considerably across individuals.

1 Although we are focusing on threshold shifts per se, one should be aware of findings in mice showing that noise exposure can cause permanent auditory neuron degeneration even when the TTS has been completely resolved (Kujawa & Liberman 2009).

Fig. 17.4 Idealized curves showing the growth of TTS2 at some frequency due to increasing durations of exposure to a narrow-band noise at three different hypothetical levels above equivalent quiet (80, 90, and 100 dB). Duration is shown on a logarithmic scale. A maximum amount of ATS (asymptotic threshold shift) is eventually reached for each noise level.

Temporary threshold shift is usually measured 2 minutes after a noise is turned off because its value is unstable before that point, and it is thus called TTS2. The three functions in Fig. 17.4 illustrate how TTS2 is affected by the duration and intensity of a noise exposure. We will assume that all three exposures are above equivalent quiet, and that the TTS2 is being measured at the frequency most affected by our hypothetical band of noise. All three functions show that TTS2 increases as the duration of the exposure gets progressively longer. As we would expect, more TTS2 is produced by higher noise levels than by softer ones for any particular exposure duration.
The rate at which TTS2 grows with duration is essentially linear when plotted on a logarithmic scale. However, TTS2 grows faster as the intensity of the noise increases, that is, the lines get steeper going from 80 dB to 90 dB to 100 dB. The linear increase continues for durations up to ~ 8 to 16 hours. The functions then level off to become asymptotic (shown by the horizontal line segments), indicating that TTS2 does not increase any more no matter how much longer the noise is kept on. In other words, the horizontal lines show the biggest TTS2 that a particular noise is capable of producing regardless of its duration. This maximum amount of TTS2 is called the asymptotic threshold shift (ATS). The threshold shift produced by a noise exposure is largest when it is first measured (i.e., TTS2), and then decreases with the passage of time since the noise ended. This process is called recovery and is illustrated in Fig. 17.5. Notice that the time scale is logarithmic, as it was in the prior figure. As a rule, complete linear recovery is expected to occur within ~ 16 hours after the noise ends, provided TTS2 is less than roughly 40 dB and it was caused by a continuous noise exposure lasting no more than 8 to 12 hours (shown by the lowest function in the figure). Otherwise, the recovery will be delayed (e.g., if the noise was intermittent, more than 8 to 12 hours in duration, and/or there is more TTS2). The middle curve in the figure is an example of delayed recovery that eventually becomes complete after 2 days. However, once the amount of TTS2 gets into the vicinity of ~ 40 dB, there is a likelihood that recovery may be incomplete as well as delayed. The top curve in the figure shows an example. Here, what started out as a 40 dB TTS ends up as a 15 dB PTS—an irreversible noise-induced hearing loss. Fig. 17.5 Idealized recovery from noise-induced threshold shift with time after the exposure has ended. Duration is on a logarithmic scale. 
Notice the largest threshold shift never recovered completely, leaving a permanent threshold shift. Even though the PTS is usually smaller than the original size of the TTS, it is possible for the PTS to be as large as the TTS. However, it cannot be any larger. Recall that TTS increases with the duration of a noise until the asymptotic threshold shift is reached. This means that there is a maximum amount of TTS (i.e., ATS) that can be caused by a particular noise at a given intensity, no matter how long it lasts. Consequently, the biggest PTS that can be caused by that noise is limited to the ATS. Keep in mind, however, that the TTS caused by a noise exposure is often superimposed upon a pre-existing sensorineural loss. For example, a TTS of 15 dB experienced by a patient who started out with a loss of 30 dB HL would bring his threshold to 45 dB HL.

Occupational noise exposure

The early use of terms like boilermakers’ disease to describe deafness among industrial workers highlights the fact that occupational noise exposure has been recognized as a major cause of hearing loss for a very long time. Industrial noises vary widely in intensity, spectrum, and time course. We can get a sense of the intensities involved from the partial list in Table 17.2, based on studies analyzed by Passchier-Vermeer (1974) and Rösler (1994). Overall, somewhere between 36 and 71% of manufacturing employees have daily workplace exposures of 90 dBA or more (Royster & Royster 1990b).

Table 17.2 Some examples of various occupational noise level intensities^a
Noise type | Example(s) of reported levels
Miscellaneous industrial | 88–104 dB SPL; 80–103 dBA
Weaving machines | 102 dB SPL
Boiler shop | 91 dBA
Riveting | 113 dB SPL
Shipyard caulking | 111.5 dB SPL
Drop forging: background | 109 dB SPL
Drop forging: drop hammer | 126.5 dB peak SPL
Mining industry trucks | 102–120 dB SPL
Mining equipment | 96–114 dB SPL
Military weapons | 168–188 dB peak SPL
^a Derived from Passchier-Vermeer (1974) and Rösler (1994).
Numerous studies have reported the noise-induced hearing losses experienced by workers in many industries. Based on an analysis of eight well-documented studies, Passchier-Vermeer (1974) reported that (1) occupational NIHL at 4000 Hz increases rapidly for ~ 10 years, at which time it is similar to the amount of TTS associated with an 8-hour exposure to a similar noise, and (2) the continued development of NIPTS at 4000 Hz slows down or plateaus thereafter. On the other hand, NIHL at 2000 Hz develops slowly for the first 10 years, after which the loss at 2000 Hz increases progressively with time. The general pattern of development of occupational NIHL is illustrated in Fig. 17.6. Here, we see the median NIHLs of female workers who were exposed to noise levels of 99 to 102 dB SPL for durations of 1 to 52 years in the jute weaving industry (Taylor, Pearson, Mair, & Burns 1965). As a group, their hearing losses started as a 4000 Hz notch, and became deeper and wider as the workers were exposed to more years of noise. It is important to remember that these are median audiograms, and that Taylor et al (1965) reported a great deal of variability among individuals.
Rösler (1994) compared the long-term development of NIHL in 11 studies representing a wide variety of different kinds and levels of occupationally related noises. Fig. 17.7 summarizes the mean (or median) audiograms after (a) 5 to 10 and (b) ≥ 25 years of exposure, and shows presbycusis curves for comparison. Notice the similarity among the configurations after ≥ 25 years of exposure despite the widely differing kinds of noises. The “outliers” in Fig. 17.7b are the two groups of firearms users. Finnish regular army personnel developed their maximum average losses within the first 5 to 10 years, and are the least affected in Fig. 17.7b. On the other hand, Eskimos who hunted without hearing protectors at least weekly since childhood had thresholds that continued to deteriorate throughout a period of ~ 45 years, and are the most affected in Fig. 17.7b.
Fig. 17.6 Median audiograms of female weavers exposed to 99 to 102 dB SPL noises for 1 to 52 work years. (Based on Taylor, Pearson, Mair, and Burns [1965]).
Fig. 17.7 Mean (or median) audiograms after (a) 5 to 10 years and (b) 25 to 40+ years from 11 studies involving different kinds of occupational noises. For reference and comparison purposes, the open circles labeled “30 years” in a and “50 and 60 years” in b are due to age alone, based on ISO-1999 (1990). (Adapted from Rösler [1994], with permission of Scandinavian Audiology.)
Nonoccupational noise exposure

Industrial noise exposures are not the only origin of NIHL. Hearing loss is also caused by environmental, recreational, and other nonoccupational noise exposures, and is sometimes called sociocusis. This is not surprising when we realize that many sounds encountered in daily life and leisure activities produce noise levels that approach and often far exceed 90 dBA. Some examples are shown in Fig. 17.8. Recreational impulse noises have been reported to reach peak SPLs of 160 to 170+ dB for rifles and shotguns (Odess 1972; Davis, Fortnum, Coles, Haggard, & Lutman 1985). A 0.22-caliber rifle generates a peak level of 137 or 140 dBA2 at the shooter’s ears (Rasmussen, Flamme, Stewart, Meinke, & Lankford 2009), and a toy cap pistol can produce ~ 140 dB (Suter 1992a). Rifles and shotguns can cause an asymmetrical loss that is poorer in the left ear, which usually faces the muzzle. Unfortunately, a majority of hunters do not wear hearing protectors or do so sporadically, even when using shotguns and high-power rifles (e.g., Stewart, Foley, Lehman, & Gerlach 2011).
Fig. 17.8 Examples of typical sources of environmental noise and their approximate maximum levels in dBA. (Based on data reported by the EPA [1974] and Clark and Bohne [1984].)
Music is probably the most common source of recreational sound exposure. Rock music averages 103.4 dBA (Clark 1991b), and some car stereo systems have been “clocked” at 140 dB SPL (Suter 1992a).3 However, personal listening devices (PLDs) or portable music/media players (PMPs) like iPods and other MP3 players as well as iPhones and other smartphones are the most ubiquitous sources of music-related sound exposure4 (for reviews, see, e.g., Fligor 2009; Punch, Elfenbein, & James 2011; Rawool 2012).
It is known that PLDs can generate sound levels that are considered to be capable of producing hearing loss (e.g., Fligor & Cox 2004; Portnuff & Fligor 2006; Keith, Michaud, & Chiu 2008), usually defined as an exposure to ≥ 85 dBA for 8 hours or its equivalent. Whether listeners actually do use their PLDs at harmful levels is a different question. Although the use of MP3 players and other PLDs varies among listeners and across studies, it appears that adolescents and young adults typically listen to their PLDs at less than harmful levels, although volume controls are often turned up when there is background noise and in various situations such as listening to favorite songs (e.g., Airo, Pekkarinen, & Olkinuora 1996; Williams 2005; Fligor & Ives 2006a,b; Ahmed, Fallah, Garrido, et al 2007; Torre 2008; Portnuff, Fligor, & Arehart 2009; Epstein, Marozeau, & Cleveland 2010; McNeill, Keith, Feder, Konkle, & Michaud 2010; Hoover & Krishnamurti 2010; Hoenig 2010; Keith, Michaud, Feder, et al 2011; Punch et al 2011; Danhauer, Johnson, Dunne, et al 2012).
2 The level depended on meter characteristics: 137 dB in fast mode and 140 dB in impulse mode.
3 Interestingly, Florentine, Hunter, Robinson, Ballou, & Buus (1998) found that ~ 9% of the subjects they studied had maladaptive patterns of listening to loud music suggestive of those associated with substance abuse.
Potentially damaging listening levels have also been reported (e.g., Ahmed, et al 2007; Farina 2008; Vogel, Verschuure, van der Ploeg, Brug, & Raat 2009; Snowden & Zapala 2010; Levey, Levey, & Fligor 2011). For example, Levey et al (2011) found that ~ 58% of the college students in their study used levels equivalent to > 85 dBA for an 8-hour exposure, and Snowden and Zapala (2010) found that the overall use of unsafe levels was 63% among the middle school students they studied. Of particular concern, Snowden and Zapala (2010) reported that many middle schoolers simultaneously listened to the same iPod in pairs, with each using one of the two earphones. This “sharing” practice encouraged them to turn up the volume to make up for the loss of binaural summation when listening monaurally as opposed to binaurally. As a result, the use of unsafe levels went up from 53% when listening with both earphones to 65% with each child listening to just one receiver.
The sound levels actually produced by MP3 players vary widely depending on the individual’s preferred listening level and other influences such as the nature of the listening environment (e.g., background noise) and the type of earphone being used (earbud, supra-aural, insert). Listeners turn up their volume controls to overcome background noise and distracting sounds, which of course increases the potential for harmful sound levels (e.g., Airo et al 1996; Ahmed et al 2007; Hodgetts, Rieger, & Szarko 2007; Atienza, Bhagwan, Kramer, et al 2008; Danhauer, Johnson, Byrd, et al 2009; Hoenig 2010; Danhauer et al 2012). In addition, the physical level of the sound produced at a given volume control setting is affected by the type of earphone (e.g., Fligor & Ives 2006a,b; Portnuff & Fligor 2006; Atienza et al 2008; Keith et al 2008; Rabinowitz 2010). Here is why. Recall that the same sound causes a higher pressure in a smaller volume than in a larger one. As a result, the more deeply placed insert receivers produce greater sound levels than the more shallowly placed receivers like “earbuds,” which, in turn, tend to produce higher levels than supra-aural earphones worn over the ears (e.g., Portnuff & Fligor 2006; Atienza et al 2008; Keith et al 2008). As a result, the potential for producing harmful sound levels is greatest for insert receivers. On the other hand, insert receivers attenuate outside noises better than earbuds and supra-aurals, so that insert users turn up their volume controls to overcome background noise less than those using other kinds of earphones (e.g., Atienza et al 2008; Hoenig 2010; Punch et al 2011).
The student has probably noticed that the discussion did not address the question of whether PLDs actually cause hearing loss. Perhaps the best available answer is provided by the following very carefully worded statement based on a systematic review by Punch et al (2011):
“The literature does not provide a consensus view regarding a causative relationship between PLD use and hearing loss, although many investigators have concluded that PLDs, especially as used by many teenagers and young adults, present a substantial risk of hearing loss or are a probable contributing factor to hearing loss over the lifespan” (Punch et al 2011, p. 70, emphasis added).
All things considered, it appears wise to limit PLD usage to reasonable levels and durations, as well as to provide hearing health information in schools and the media (e.g., Ahmed et al 2007; Fligor 2009; Punch et al 2011; Danhauer et al 2012). A review of the evidence to date led Fligor (2009) to suggest that listeners should restrict their PLD use to ≤ 80% of the maximum volume setting and ≤ 90 minutes per day. Since this limit applies to the standard equipment receivers that come with the PLD (typically earbuds), lower volume settings and shorter listening durations would apply when insert receivers are used. Interested students will find an informative set of suggestions for providing hearing health information and messages in the article by Punch and colleagues (2011).
In light of this discussion, one might expect that the prevalence of hearing loss would be getting worse over time. There have been several pessimistic reports, but the most troubling findings were reported by Shargorodsky, Curhan, Curhan, and Eavey (2010; see also Chapter 13). They compared the hearing loss statistics in the 2005–2006 National Health and Nutrition Examination Survey (NHANES)5 to those in the 1988–1994 survey. According to their analysis, the prevalence of hearing loss among American 12- to 19-year-olds rose from 14.9 to 19.5% overall and from 12.8 to 15.4% for the high frequencies between the two surveys.
However, subsequent analyses of the same NHANES data showed that the prevalence of high-frequency hearing loss among adolescents actually did not increase between 1988–1994 and 2005–2006 (Henderson, Testa, & Hartnick 2011; Schlauch & Carney 2011, 2012; Schlauch 2013). The more optimistic outcomes were revealed by controlling for confounding variables, such as any signs of conductive disorders, threshold shifts affecting both the low and high frequencies as opposed to just the highs, and important calibration and reliability issues. After analyzing the 2005–2006 survey data with these rigorous controls, Schlauch and Carney (2012) showed that the prevalence of high-frequency hearing loss among adolescents actually approximated only 7.4 to 7.9%. Other findings also suggested that the prevalence of high-frequency hearing loss has not been increasing among adolescents and young adults, at least to date (Augustsson & Engstrand 2006; Rabinowitz, Slade, Galusha, Dixon-Ernst, & Cullen 2006). Moreover, hearing among American adults may well be improving over time. For example, median adult thresholds improved between 1959–1962 and 1999–2004 (Hoffman, Dobie, Ko, Themann, & Murphy 2010); and the prevalence of hearing loss ≥ 25 dB HL dropped from 27.9 to 19.1% between 1971–1973 and 1999–2004 for those without diabetes (Cheng et al 2009).
5 These surveys are available from the Centers for Disease Control and Prevention at http://www.cdc.gov/nchs/nhanes.htm.
The reasons for these improving trends are not completely clear, but, if real, they probably reflect several potential influences (see, e.g., Cruickshanks, Nondahl, Tweed, et al 2010; Hoffman et al 2010; Punch et al 2011; Zhan, Cruickshanks, Klein, et al 2011; Schlauch & Carney 2012; Schlauch 2013). Some of the possibilities might include benefits derived from improved public awareness of noise effects, as well as overall health improvements related to factors like nutrition, supplements, health habits, and medical conditions and care. It is clear, however, that we must not become complacent about the effects of noise exposure, especially in light of ever-increasing opportunities for occupational, environmental, and recreational noise exposure for almost all age groups.
Individual susceptibility and interactions with other factors Individual susceptibility to NIHL varies widely and is affected by several factors (Boettcher, Henderson, Gratton, Danielson, & Byrne 1987; Henderson, Subramaniam, & Boettcher 1993). At first glance, it would seem wise to use temporary threshold shifts as the test for susceptibility to NIHL because they precede permanent threshold shifts, and the amount of TTS after an 8-hour workday seems to be similar, on a group basis, to the amount of PTS after 10 years of occupational exposure. However, the correlation between TTS and PTS is too ambiguous for TTS results to be used as a test of one’s susceptibility to NIHL, especially when intermittent exposures and impulse noises are involved. The use of TTS testing is also unattractive because of litigation fears.
The effect of noise exposure is exacerbated by vibration, but vibration alone does not produce a hearing loss. Susceptibility is also affected by the effectiveness of the acoustic reflex and possibly by the efferent auditory system. Susceptibility is also affected by noise exposure history: repeated intermittent noise exposures produced progressively smaller hearing losses in laboratory animals over time (Clark, Bohne, & Boettcher 1987; Subramaniam, Campo, & Henderson 1991a,b). Does a prior noise exposure affect susceptibility to future exposures? This question has been addressed by measuring threshold shifts in laboratory animals that were subjected to (1) a prior (initial) noise exposure (2) followed by a recovery period, and (3) then subjected to a subsequent noise exposure (Canlon, Borg, & Flock 1988; Campo, Subramaniam, & Henderson 1991; Subramaniam, Henderson, Campo, & Spongr 1992). Subsequent low-frequency exposures produced smaller threshold shifts after a low-frequency prior exposure; but susceptibility to a subsequent high-frequency noise was made worse by prior exposures (with low- or high-frequency noises).
Susceptibility to NIHL is increased by nonauditory factors as well, including ototoxic drugs (aminoglycosides and cis-platinum, and possibly, to a lesser extent, salicylates) and industrial and environmental toxins (e.g., solvents, carbon monoxide). Other factors have been identified, but they account for a very small part of the variability, often with inconclusive or equivocal results. These include the following, with the more susceptible groups identified in parentheses: age (very young and the elderly), gender (males), eye color (blue), and smoking (smokers).
Damage Risk Criteria and Noise Exposure Standards
There are probably two closely related key questions that concern people about noise exposure and hearing loss: First, what are the chances of getting a hearing loss from being exposed to some noise for some period of time? Second, how much noise exposure is acceptable before it becomes damaging to hearing? These questions are addressed by damage risk criteria (DRC), which are standards or guidelines that pertain to the hazards of noise exposure. However, this apparently straightforward issue actually involves many interrelated questions, the answers to which are complicated, not fully understood, and often controversial. Some questions deal with the noise and predicting its effects: How should noise be quantified for the purpose of assessing damage risk, and how should we handle different kinds of noises (continuous, impulsive, intermittent, and time-varying)? Are there valid and reliable “predictors” of NIHL, and if so, what are they and how are they related to hearing loss? Other questions deal with the amount of NIHL and how many people are affected: How much hearing loss is acceptable or at least tolerable, and at which frequency(ies)? Since people vary in susceptibility, what is the acceptable/tolerable percentage of the population that should be “allowed” to develop more than the acceptable/tolerable amount of NIHL? Still other questions pertain to distinguishing between different sources of hearing impairment: Can we separate NIPTS from other sources of hearing impairment (e.g., pure presbycusis and disease), and if so, how? Can we distinguish between the effects of occupational noise exposure and nonoccupational causes of hearing loss, including nonoccupational noise exposures? The latter is an important issue because we are usually concerned with industrial exposures.
For this reason, we often use the term industrial noise-induced permanent threshold shift to refer to the part of a hearing loss actually attributable to industrial (or other occupational) noise exposures. These questions should be kept in mind when dealing with the effects of noise exposure in general, and particularly when dealing with hearing conservation programs and assessing hearing handicap for compensation purposes.
One approach to damage risk criteria is to protect just about everybody from just about any degree of NIHL. This notion is illustrated by the “Levels Document” issued by the Environmental Protection Agency (EPA, 1974), which suggested that NIPTS could be limited to less than 5 dB at 4000 Hz in 96% of the population by limiting noise exposures to 75 dB Leq (8) or 70 dB Leq (24) over a 40-year period. Changing the criterion to 77 dB Leq (24) would protect 50% of the population.
The Committee on Hearing and Bioacoustics and Biomechanics (CHABA) published damage risk criteria intended to limit the amount of industrial NIPTS to 10 dB at ≤ 1000 Hz, 15 dB at 2000 Hz, and 20 dB at ≥ 3000 Hz among 50% of workers who are exposed to steady or intermittent noises for 10 years (Kryter, Ward, Miller, & Eldredge 1966). The CHABA DRC was based on the amount of TTS2 that occurs after an 8-hour noise exposure, and relied on the notion that this value seems to correspond to the amount of NIPTS that is present after 10 years of occupational exposure. However, it has never been proven that TTS validly predicts NIPTS, and this notion has several serious problems (e.g., Melnick 1991; Ward 1991). Damage risk criteria for impulsive noises, based on similar principles, were introduced by Coles, Garinther, Hodge, and Rice (1968). The CHABA damage risk criteria required noises to be measured in third-octave or octave bands, and expressed maximum exposure levels for durations up to 8 hours per day. The maximum allowable octave-band levels for an 8-hour exposure were ~ 85 dB for frequencies ≥ 1000 Hz but were higher for lower frequencies, because they cause smaller threshold shifts than do equally intense higher-frequency exposures. The allowable noise levels also became higher as the duration of the exposure decreased below 8 hours, because the amount of threshold shift is related to exposure duration. Botsford (1967) developed equivalent values that make it possible to apply the CHABA damage risk criteria to noises measured in dBA instead of octave-band levels.
Several well-known approaches and estimates of the risks for developing a hearing loss due to occupational noise exposure have been developed over the years (e.g., NIOSH 1972, 1998; EPA 1973; ISO 2013; ANSI 2006b; Prince, Stayner, Smith, & Gilbert 1997). The NIOSH (1972) risk criteria were based on a linear fit to the data collected in the 1968–1972 NIOSH Occupational Noise and Hearing Survey (Lempert & Henderson 1973), which was reanalyzed using a best-fitting nonlinear model by Prince et al (1997), also known as the NIOSH-1997 model. [The 1968–1972 survey provides valuable data that are still used today because it pre-dated regulations requiring the use of hearing protectors (see below), the use of which makes it hard to determine actual exposure levels from modern noise surveys.] The criteria for material impairment used by Prince et al (1997) involved an average hearing loss in both ears of more than 25 dB for certain combinations of frequencies: (a) 500, 1000, and 2000 Hz; (b) 1000, 2000, and 3000 Hz; and (c) 1000, 2000, 3000, and 4000 Hz. The latter is similar to the criterion recommended by ASHA (1981), discussed later in this chapter.6 In addition, the National Institute for Occupational Safety and Health (NIOSH, 1998) has adopted a binaural average loss exceeding 25 dB HL for the 1000 to 4000 Hz average as its criterion for material hearing impairment.
The International Standards Organization ISO-1999 standard (2013) considers the hearing loss of noise-exposed people to be due to the combination of an age-related component and NIPTS, which are almost additive.7 Formulas are used to predict the effects of noise exposure levels (from 85 to 100 dBA) and durations (up to 40 years) at each frequency for different percentiles of the population (e.g., the 10% with the most loss, the 10% with the least loss, medians, etc.). This approach has been adopted in the ANSI S3.44 standard (2006b). In effect, NIPTS is determined by comparing the thresholds of a noise-exposed group to a comparable unexposed group of the same age. These standards provide two sets of age-related hearing level distributions. One is based on a population highly screened to be free of ear disease and noise exposure (representing more or less “pure” presbycusis), and the other is from an essentially unscreened general population.
To estimate the risks to hearing attributable to a certain amount of occupational noise exposure, we need to compare the percentage of people exposed to that noise who get material amounts of hearing impairment to the percentage of unexposed people of the same age who also get a material hearing impairment. Obviously, the risk estimate will depend on the criterion used to determine when a material hearing impairment begins. In other words, excess risk for occupational hearing impairment is simply the percentage of exposed people with material impairments minus the percentage of unexposed people with material impairments. Fig. 17.9 shows the amount of excess risk of hearing impairment due to 40 years of occupational noise exposures of 80, 85, and 90 dBA (8-hour TWA) estimated by several of the methods just described. It is clear that excess risk becomes readily apparent by the time occupational noise exposure levels reach 85 dBA even though the actual percentages estimated by the various methods and pure tone combinations differ. Notice, too, the higher excess risk estimates for NIOSH than for ISO. The higher excess risk estimates for NIOSH may be due to differences at the lower frequencies that were not due to noise between the exposed and control groups (Dobie 2007). Clearly, it is important to remember that the estimated risk of NIHL is affected by which reference population is used (see, e.g., Prince et al 2003).
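The excess-risk calculation itself is a simple subtraction. A minimal sketch follows; the function name and the percentages are hypothetical illustrations, not values taken from Fig. 17.9:

```python
def excess_risk(pct_exposed_impaired, pct_unexposed_impaired):
    """Excess risk (in percentage points): the prevalence of material
    hearing impairment among noise-exposed people minus the prevalence
    among unexposed people of the same age."""
    return pct_exposed_impaired - pct_unexposed_impaired

# Hypothetical values: if 25% of exposed workers but only 10% of
# age-matched unexposed people have a material impairment, the
# exposure accounts for 15 percentage points of excess risk.
print(excess_risk(25.0, 10.0))  # 15.0
```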
6 Prince et al (1997) weighted the 1000–4000 Hz average based on the articulation index (see below) as opposed to ASHA’s (1981) simple 1000–4000 Hz average; however, the two variations yielded similar risk estimate outcomes.
OSHA noise exposure criteria Noise exposure standards are particularly effective when they carry the weight of law. Numerous examples are found in state and local ordinances and in military regulations, but the most influential are federal labor regulations found in the Walsh-Healey noise standard and OSHA Hearing Conservation Amendment (HCA) (DOL 1969; OSHA 1983). These noise exposure limits are shown in Table 17.3 (first and second columns), where we see that the maximum noise exposure limit is 90 dBA for 8 hours. In addition, impulse or impact noises are not supposed to exceed 140 dB peak SPL. If the noise level exceeds 90 dBA, then the exposure duration must be reduced by one half for each 5 dB increase. In other words, the maximum exposures are 8 hours at 90 dBA, 4 hours at 95 dBA, 2 hours at 100 dBA, down to one-quarter hour at 115 dBA. However, the noise level is not permitted to exceed 115 dBA even for durations shorter than one-quarter hour. This trade-off of 5 dB per doubling of time is called the 5 dB trading rule or exchange rate, and is based on the premise that sounds that produce equal amounts of TTS are equally hazardous. This is the same equal-TTS principle discussed previously for the CHABA damage risk criteria. The major alternative approach is to reduce the intensity by 3 dB for each doubling of duration (the 3 dB trading rule or exchange rate), which is employed by the military, the EPA, and many foreign countries. The 3 dB trading rule is based on the equal-energy concept that considers equal amounts of noise energy to be equally hazardous, and is more strongly supported by scientific evidence than the 5 dB rule (e.g., Suter 1992b; NIOSH 1998).
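Both exchange rates boil down to a halving of permissible duration for each fixed increment in level. The sketch below illustrates the arithmetic (the function names are ours; the OSHA version is referenced to the 90 dBA PEL with a 5 dB exchange rate, and the NIOSH version to the 85 dBA REL with a 3 dB exchange rate):

```python
def permissible_hours_osha(level_dba):
    """Maximum exposure duration (hours) under OSHA's 5 dB exchange
    rate, referenced to the 90 dBA permissible exposure level (PEL)."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

def permissible_hours_niosh(level_dba):
    """Maximum exposure duration (hours) under NIOSH's 3 dB exchange
    rate, referenced to the 85 dBA recommended exposure limit (REL)."""
    return 8.0 / 2 ** ((level_dba - 85.0) / 3.0)

# Halving per 5 dB: 8 h at 90 dBA, 4 h at 95 dBA, 2 h at 100 dBA, ...
print(permissible_hours_osha(95))    # 4.0
print(permissible_hours_osha(100))   # 2.0
# The 3 dB rule is stricter: about 47.6 minutes at 95 dBA
print(permissible_hours_niosh(95) * 60)
```

Note how the NIOSH values reproduce the third column of Table 17.3 (e.g., roughly 47 minutes 37 seconds at 95 dBA).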
Fig. 17.9 Excess risk of material hearing impairment (binaural average > 25 dB HL) using various approaches and pure tone averages expected to result from 40 years of occupational noise exposures of 80, 85, and 90 dBA (8 hours TWA) among 60-year-old workers. The approach labeled “NIOSH-1997” refers to Prince et al (1997). (Based on NIOSH [1998].)
The maximum allowable noise exposure is known as the permissible exposure level (PEL) and is considered to be one full dose of noise. Using the OSHA (1983) criteria, a person has received one dose (or a 100% dose) of noise regardless of whether he was exposed to 90 dBA for 8 hours or 105 dBA for 1 hour. This is the same kind of terminology that is used for exposures to radiation or noxious chemicals: a full day’s dose is a full dose whether it accumulates over 8 hours or just a few minutes. Because of the 90 dB PEL and the 5 dB trading rule, an 8-hour exposure to 85 dBA would be a half dose (50% dose) and an 8-hour exposure to 80 dBA would be a quarter dose (25% dose). Similarly, an 8-hour exposure to 95 dBA would be a 200% dose or two doses. Hence, we can also express a noise exposure in terms of an equivalent 8-hour exposure, that is, its 8-hour TWA. Equivalent values of noise level exposures in terms of TWA and dosage are shown in Fig. 17.10.8 Using the plot labeled “OSHA (1983),” we see that the following are examples of equivalent noise exposures:
90 dBA TWA and 1 dose and a 100% dose (the PEL)
95 dBA TWA and 2 doses and a 200% dose
100 dBA TWA and 4 doses and a 400% dose
105 dBA TWA and 8 doses and an 800% dose
120 dBA TWA and 64 doses and a 6400% dose
8 For reference purposes, the relationship between TWA in dBA and noise dose in percent (D) can be calculated as follows: TWA = 16.61 × log(D/100) + 90 for OSHA (1983) purposes, and TWA = 10 × log(D/100) + 85 according to the NIOSH (1998) criteria.
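The footnote's formulas can also be inverted to convert in either direction between dose and TWA. A brief sketch, assuming base-10 logarithms as in the footnote (the function names are ours):

```python
import math

def twa_from_dose(dose_pct, osha=True):
    """8-hour TWA (dBA) from a noise dose in percent.
    OSHA (1983): TWA = 16.61*log10(D/100) + 90
    NIOSH (1998): TWA = 10*log10(D/100) + 85"""
    if osha:
        return 16.61 * math.log10(dose_pct / 100.0) + 90.0
    return 10.0 * math.log10(dose_pct / 100.0) + 85.0

def dose_from_twa(twa_dba, osha=True):
    """Inverse relationship: percent dose from an 8-hour TWA."""
    if osha:
        return 100.0 * 10 ** ((twa_dba - 90.0) / 16.61)
    return 100.0 * 10 ** ((twa_dba - 85.0) / 10.0)

print(round(twa_from_dose(200)))  # 95 : a 200% OSHA dose is a 95 dBA TWA
print(round(dose_from_twa(105)))  # 800 : a 105 dBA TWA is an 800% dose
```

These conversions reproduce the equivalent exposures listed above (e.g., 105 dBA TWA equals 8 doses, or an 800% dose).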
Table 17.3 Maximum permissible noise exposures according to the OSHA (1983) regulations and NIOSH (1998) recommendations
Maximum exposure level in dBA | OSHA (1983) regulationsa | NIOSH (1998) recommendations
85b | — | 8 hours
88 | — | 4 hours
90c | 8 hours | 2 hours 31 minutes
92 | 6 hours | 1 hour 35 minutes
95 | 4 hours | 47 minutes 37 seconds
97 | 3 hours | 30 minutes
100 | 2 hours | 15 minutes
102 | 1 hour 30 minutes | 9 minutes 27 seconds
105 | 1 hour | 4 minutes 43 seconds
110 | 30 minutes | 1 minute 29 seconds
115 | 15 minutes or less | 28 seconds
aFrom OSHA (1983), Table G-16, which also indicates: “When the daily noise exposure is composed of two or more periods of noise exposure of different levels, their combined effect should be considered, rather than the individual effect of each. If the sum of the following fractions: C1/T1 + C2/T2 + … + Cn/Tn exceeds unity [i.e., 1], then the mixed exposure should be considered to exceed the limit value. Cn indicates the total time of exposure at a specified noise level, and Tn indicates the total time of exposure permitted at that level.”
bNIOSH (1998) REL for an 8-hour TWA exposure.
cOSHA (1983) PEL for an 8-hour TWA exposure.
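The mixed-exposure rule quoted from Table G-16 can be sketched in code. In this illustration the permitted duration T for each level is computed from the OSHA 5 dB exchange rate rather than looked up in the table, and the function names and example values are ours:

```python
def osha_limit_hours(level_dba):
    """Permitted duration T (hours) at a given level, per OSHA's
    5 dB exchange rate referenced to the 90 dBA PEL."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

def mixed_exposure_fraction(exposures):
    """Sum of C_n / T_n over (level_dba, hours_actually_exposed) pairs.
    A result above 1.0 means the combined exposure exceeds the limit."""
    return sum(hours / osha_limit_hours(level) for level, hours in exposures)

# Example: 4 hours at 90 dBA (limit 8 h) plus 2 hours at 95 dBA (limit 4 h)
frac = mixed_exposure_fraction([(90.0, 4.0), (95.0, 2.0)])
print(frac)  # 1.0 -> 0.5 + 0.5, exactly at the permissible limit
```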
Fig. 17.10 Noise exposure in terms of 8-hour TWA in dBA and noise dose (shown in percent on the left axis and as the number of doses on the right axis) according to OSHA (1983) regulations compared with recommendations by NIOSH (1998; see text).