World Hearing Day
Wed, 21 Feb 2024

     Every year, on March 3, the World Health Organization (WHO) partners with healthcare and community organizations to observe World Hearing Day.  Events are held around the world, with a common theme, to promote ear and hearing health and broaden awareness of related issues.
     The title of the 2024 program is “Changing Mindsets” and the unifying theme for this year’s events is “Let’s make ear and hearing care a reality for all!”  As much a rallying cry as a theme, World Hearing Day organizers strive to eliminate the stigma often associated with hearing issues and to expand global access to information, monitoring, and treatment.
     World Hearing Day 2024 partners and advocates are encouraged to develop programs and host events that further the following objectives:
  • Counter the common misperceptions and stigmatizing mindsets related to ear and hearing problems in communities and among health care providers.
  • Provide accurate and evidence-based information to change public perceptions of hearing loss.
  • Call on countries and civil society to address misperceptions and stigmatizing mindsets related to hearing loss, as a crucial step towards ensuring equitable access to ear and hearing care.

     To grasp the enormity of the issues of hearing loss and ear health and their global impact, consider the following (from WHO):
  • Globally, unaddressed hearing loss costs ~$980B (USD) annually.  Of that, more than $310B is health-related cost, and over $180B is attributed to productivity losses.
  • In 2019, 1.5 billion people (~1 in 5) suffered from hearing loss; nearly 430 million of these cases were moderate or worse.  The total number is predicted to rise to 2.5 billion (~1 in 4) by 2050.
  • In the Americas Region alone, 217 million people have existing hearing loss; another 196 million are affected in the European Region.
  • Worldwide, more than 80% of hearing and ear care needs are unmet.
  • Recreational sound places 50% of 12 – 35-year-olds at risk for hearing loss.
  • Approximately 16% of hearing loss in adults is attributed to occupational noise exposure.
     World Hearing Day events and promotions may be generalized to present the broadest array of information to the widest audience possible.  Many default to addressing noise-induced hearing loss (NIHL) in adults or presbycusis, simply because these are the most-salient issues.  However, there are several other issues that span a person’s “life course.”  Initiatives related to those issues, whether or not they become the focus of a special event, are worthy of note; these include:
  • Immunization programs to prevent infections that can affect hearing and ear health.
  • Education and strategy development to manage exposures to ototoxic chemicals and medications.
  • Hearing aid fitment education and training.
  • Establishing “safe listening” practices for personal electronic devices and recreational activities.
  • Design guidance for entertainment venues to enhance “safe listening.”
  • Promoting lifelong hearing monitoring and protection, from infancy onward.
  • Defining the People-Centered Care public health strategy, summarized by the H.E.A.R.I.N.G. acronym.
     Numerous events were planned in the USA in observance of World Hearing Day 2024; details are available on the official website.

Related Websites
World Hearing Day – the official website
World Health Organization (WHO)
CDC/NIOSH – US Affiliate
World Hearing Day – Wikipedia (includes information on past observances)
Hearing Cooperative Research Center
Dangerous Decibels
Hearing Industries Association

     The upcoming World Hearing Day serves to reinforce the notion that all causes of hearing loss and all stages of life require attention.  Though “The Third Degree” focuses on aspects of commercial enterprises, it is also a welcome opportunity to remind readers that our interests, our lives, and our impacts are not limited to professional endeavors.  Taking a holistic approach – to any health topic – can improve the quality, and perhaps the quantity, of life for ourselves and others.

     For additional guidance or assistance with Safety and Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

References
[Link] “Integrated People-Centred Ear and Hearing Care:  Policy Brief.”  World Health Organization; 2021.
[Link] “World Report on Hearing.”  World Health Organization; 2021.
[Link] “Safe listening devices and systems: a WHO-ITU standard.”  World Health Organization and International Telecommunication Union; 2019.
[Link] “WHO global standard for safe listening venues and events.”  World Health Organization; 2022.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
Occupational Soundscapes – Part 9:  Concepts in Communication
Wed, 07 Feb 2024

     One of the most important aspects of soundscape management is the maintenance of communication capabilities.  Achieving stable communication is particularly challenging, because communication both contributes to and competes with the soundscape in which it takes place.  Necessary types of communication may include verbal and nonverbal, two-way or broadcast, face-to-face or remote, emergency and routine.
      Effective communication requires that a message’s content, delivery mechanism, sound characteristics, and receiver are compatible.  To design an effective communication system, due consideration must be given to the sender (e.g. speaker), receiver (listener), and everything in between.
     This installment of the “Occupational Soundscapes” series explores characteristics of and interactions among ambient sound, messages or signals, and auditory capabilities to provide the conceptual background needed to establish communication system requirements.  There is an emphasis on speech communication, given its prevalence and challenges in workplaces.
Auditory Fitness for Duty
     Auditory Fitness for Duty (AFFD) standards are often associated with military service and related occupations.  AFFD is an assessment of one’s auditory capabilities with respect to safe and efficient performance of one’s duties.  It is being supplanted in military applications, but continues to serve as a cautionary tale.
     Even without details of the test and scoring procedures, serious flaws can be seen in the AFFD recommendation chart, shown in Exhibit 1.  Examples include:
  • With a score of 11, the recommendation is based exclusively on length of service.  An individual could be discharged despite identical performance to another retained without restriction.
  • With a score as low as 4, an 18-year veteran is recommended for retention, while a 17-year veteran is recommended for reassignment.  With less than 15 years of service, one’s career is in jeopardy.
  • With a “perfect” score (13), only 8+-year veterans are recommended for retention.
     Experience tends to improve test scores.  Some improvement can be attributed to genuine increases in task performance via learning curves.  However, familiarity with a test procedure can also inflate scores; in such cases, the true performance of the less-experienced test-taker may actually be higher than the scores suggest.  Unless length of service can be shown to improve task performance beyond that demonstrated by the test, it should not be a factor in the recommendation.  Overreliance on arbitrary criteria can lead to bizarre and dangerous results.
     The concept of functional hearing is more useful in determinations of fitness for duty.  It refers to auditory capability sufficient to maintain situational awareness and speech communication and perform other tasks that require audition.  An assessment of functional hearing requires testing or monitoring of task performance of an individual in the setting of concern.  Results obtained in a laboratory setting or using special controls may not be representative of performance in the task environment.  However, other data, such as audiometric and soundfield measurements are complementary and may have diagnostic value.
     Whether subject to an AFFD protocol or less-formal evaluation, an organization must ensure that “hearing-critical” tasks are assigned only to those with the auditory capabilities necessary to perform them successfully.  A hearing-critical task is one with the following three characteristics:
  • Successful performance of the task is a required component of the job.
  • No compensatory substitutes (e.g. experience, visual cues) can overcome auditory incapacity.
  • Failure to perform the task successfully creates a hazard for oneself or others.
Implicit in the terms functional hearing and fitness for duty is a higher standard for hearing-critical tasks than for more-mundane or routine tasks.
 
Signal-to-Noise Ratio
     Signal-to-Noise Ratio (S/N or SNR) is a fundamental concept in communication system design.  The series, thus far, has focused on the noise, but effective communication requires that attention also be paid to the signal and a key relationship between the two.
     Conceptually, S/N is the detectability of a signal in the presence of noise.  Mathematically, it is the ratio of signal power to noise power:  S/N = Wsig/Wnoise.  Following the convention of sound levels (see Part 3), S/N is typically expressed in decibels (dB):  S/N = 10 log (Wsig/Wnoise) dB or, using pressure values, S/N = 20 log (Psig/Pnoise) dB.  Fortunately, measurements are typically recorded in decibels, yielding the simple expression
          S/N = (SPLsig – SPLnoise) dB.
     Positive S/N values (ratios > 1.0) identify signals whose intensity exceeds that of the accompanying noise.  For example, an S/N of 10 dB indicates that the signal is 10 dB “louder” than the noise.  Conversely, negative S/N values (ratios < 1.0) identify signals of lower intensity than the noise.  An S/N of 0 dB (ratio = 1.0) indicates a signal and noise of equal intensity.
     One possible framework for the use of S/N is to treat the intensity of noise, with any controls (explored further in future installments) active, as an independent variable and signal intensity as a dependent variable.  Target S/N is the parameter used to determine the appropriate intensity of a given signal.  This simple relation yields a series of parallel lines, as shown in Exhibit 2.
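Because S/N in decibels reduces to a level difference, both the ratio itself and the target-S/N framework described above are easy to script.  The sketch below is illustrative only; the function names are my own, not drawn from any cited source:

```python
def snr_db(spl_signal: float, spl_noise: float) -> float:
    """S/N in dB.  SPLs are already logarithmic quantities, so the
    ratio reduces to a simple difference: S/N = SPLsig - SPLnoise."""
    return spl_signal - spl_noise

def required_signal_spl(spl_noise: float, target_snr: float) -> float:
    """Target-S/N framework: treat noise level (with controls active)
    as the independent variable; the signal SPL needed to achieve a
    target S/N is the noise SPL plus that target."""
    return spl_noise + target_snr

print(snr_db(85.0, 75.0))              # 10.0  (signal 10 dB "louder" than noise)
print(snr_db(70.0, 80.0))              # -10.0 (signal buried below the noise)
print(required_signal_spl(72.0, 15.0)) # 87.0  (signal SPL needed for S/N = 15 dB)
```

Plotting `required_signal_spl` against noise SPL for several target values reproduces the series of parallel lines described above.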
     S/N is a simple but important concept, relevant to all forms of communication.  As such, it can be cited in reference to verbal (i.e. speech) and nonverbal communication, with or without the use of electronic equipment (e.g. telephone, radio, amplifier, loudspeaker).  When referencing speech communication, S/N may be called the speech-to-noise ratio.  The adjustment in terminology serves only to specify the type of signal under scrutiny; definitions and application do not change.

Masking
     Masking is a phenomenon that causes a signal or message to be more difficult to hear or decipher in the presence of other sounds.  Standard audiometric tests determine one’s absolute threshold – the lowest intensity at which a sound is audible in quiet.  The lowest intensity at which a sound is audible in the presence of other sounds is called the masked threshold.  The difference between the two – that is, the magnitude of the increase in hearing threshold – is the amount, or level, of masking caused by extraneous sound.
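The amount of masking follows directly from the two thresholds just defined.  A minimal sketch (Python; the function name and threshold values are illustrative, not measured data):

```python
def masking_db(masked_threshold: float, absolute_threshold: float) -> float:
    """Amount (level) of masking: the rise in hearing threshold caused
    by extraneous sound, i.e. masked threshold minus the absolute
    threshold measured in quiet."""
    return masked_threshold - absolute_threshold

# A tone audible at 15 dB SPL in quiet, but only at 52 dB SPL in the
# presence of a masker, experiences 37 dB of masking.
print(masking_db(52.0, 15.0))  # 37.0
```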
     “Extraneous sound” is often referred to as “noise” to simplify the presentation.  However, the common definition of noise – “unwanted sound” – may not be fully applicable.  In fact, several coincident sounds may be necessary, or “wanted,” such as warning signals or other feedback sounds.  In this context, a modified definition of “noise” is helpful:  “any sound other than the sound of current interest.”  This definition accounts for individual analysis of multiple sounds that cannot or should not be eliminated from the soundscape.  The “other” sound is also called the masking sound or masker.
     Masking of a signal can occur in several ways.  The most prevalent is direct masking, which occurs when the signal and masker have similar frequencies.  The area of the cochlea needed to process the signal (see Part 2) is preoccupied with the masker, possessing no capability for “attention shift.”  The signal cannot, therefore, be perceived as a distinct input.
     Whether pure tone, narrowband, or broadband, a masker’s influence extends beyond its constituent frequencies.  Frequencies lower than the masker are masked to some degree, but to a much lesser extent than frequencies higher than the masker.  This phenomenon is called the upward spread of masking and is the reason that high frequencies are more susceptible to masking than are low frequencies.
     Example masking curves, for a range of pure tone frequencies, are shown in Exhibit 3; the value on each curve is the level of the masking tone (frequency shown at top of each plot) above its threshold.  Several characteristics of the masking phenomenon can be seen in these plots of masking vs. frequency, including:
  • The upward spread of masking is seen in the rapid rise to maximum masking below the masking tone frequency and slow decay at higher frequencies.
  • Local minima at the masking tone frequency represent increased sensitivity due to the creation of beats (see Part 7).
  • Additional local minima occur at integral multiples of the masking tone frequency.  These are the harmonics for which sensitivity is greater than adjacent frequencies.
     The comparative curves shown in Exhibit 4 demonstrate that a band of noise is more effective as a masker than a pure tone.  Key takeaways include:
  • A band of noise raises thresholds near its center frequency much higher than does a pure tone of that frequency.
  • Effects of beating and harmonics are not significant when the masker is a band of noise.
  • At higher frequencies (> ~1000 Hz in this data set), the pure tone is a more-effective masker than narrowband noise.  The frequency at which this “crossover” occurs is dependent upon the degree of distortion in the listener’s ear.
     A high-frequency band of noise, at high intensity (> 80 dB), can mask pure tones at low frequency.  This phenomenon is known as remote masking.  It is believed to be a result of low-frequency distortion caused by overstressing the auditory system.  This effect can be reduced by filtering.
     Interaural masking occurs when one ear receives the signal while the other receives the noise.  A masking sound at a level at least 50 dB greater than the signal is needed for significant masking of this type to occur.
     Central masking occurs when sound incident on one ear raises the threshold of the opposite ear.  It is believed to be negligible and, accordingly, receives little attention.
     Adding noise is counterintuitive, but can provide a benefit in certain conditions.  If signal and noise are received in one ear, presenting the other with a 100-Hz-wide band of noise, unrelated to the signal or noise in the first ear, provides ~1 dB “release from masking.”  A release from masking is a lowering of masked threshold.
     Once signal intensity exceeds that of its masker by a few decibels, it seems as loud as it would in the absence of the masker.  Loudness of a signal increases more rapidly above its masked threshold in noise than it does in quiet.  These points are demonstrated by the converging curves in Exhibit 5.
     The preceding discussion of masking focused on various effects of frequency on the audibility of signals, but there is also a temporal component.  Forward and backward masking refer to an increase in the threshold of a signal caused by a sound occurring before or after it, respectively.
     Forward masking – when a signal follows a masker – is somewhat intuitive.  The cochlea must be “freed” from its prior stimulation in order to process the next.  Though brief, this refractory period should not be ignored.
     Backward masking – when a signal precedes a masker – is much more difficult to comprehend.  It involves complex interactions in the auditory system, the exploration of which is beyond the scope of this series.  For purposes of this discussion, it is accepted as a genuine phenomenon supported by research detailed in cited references.
     A graphical representation of forward and backward masking is provided in Exhibit 6.  The break in the graph represents a 5-ms-duration “probe tone” (signal).  Backward masking of the tone is presented, in “negative time,” to the left of the break and forward masking to the right.  The smaller threshold shift experienced with dichotic presentation (signal in one ear, masker in the other) further demonstrates the advantage of binaural listening.
     When the sequence of auditory inputs is important, sufficient delay must exist between signals to allow the listener to determine which occurred first.  With a 2 – 3 ms delay, two distinct signals can be recognized, but the sequence is indeterminable.  A 10 – 20 ms delay is required to correctly identify the sequence of two sounds received.
 
     While pure tones can be generated for use as warnings and other auditory signals, they are not the norm in naturally-occurring soundscapes or occupational settings.  Bands of noise are more-common competitors for listeners’ “auditory attention.”  The most complex, and often most important, signals are contained in speech communication.  Speech is subject to S/N and masking concerns, as are pure tones and other signals, but experiences additional challenges; these are explored in the following sections.
 
Audibility and Intelligibility
     For many sounds, such as pure tones or narrowband sounds used as warning signals, mere audibility is sufficient to serve their intended purpose.  If a relatively large number of signals must be monitored, rapid discrimination among them becomes more challenging.  When speech communication is needed, there is a much higher bar to be cleared; in addition to being audible and discriminable, speech must also be intelligible to serve its purpose.
     The expansion of telephony from commercial enterprises to personal use and its subsequent proliferation provided great impetus for the study of speech intelligibility.  Over the past century, several test procedures, media sets, and evaluation schemes have been developed to quantify performance of communication technologies.  Though the study of intelligibility originated in telephony, face-to-face communication is subject to similar challenges and can be assessed in similar fashion.  Use of an electronic or other intermediary device may improve or degrade intelligibility, but it does not alter the requirements for effective communication.
     Some methods of assessment and scoring are rather sophisticated and complex.  Reproduction of lengthy procedures is not warranted; readers are encouraged to consult cited references or other sources for additional detail.  In lieu of comprehensive instructions, some prominent indices are introduced to provide conceptual understanding of intelligibility testing and scoring.  Conceptual understanding is sufficient to recognize the influence of a soundscape on communication system design choices and vice versa.  For those that choose to perform calculations, dedicated software and formatted spreadsheets are available to assist in this effort from sources such as the Acoustical Society of America (ASA).
 
     Articulation Index (AI) is the benchmark to which other intelligibility indices are typically compared.  Calculation of AI is a laborious process, requiring a series of data plots and correction factors; it follows the choice of a calculation method, based on the data available or the precision desired.  The complex calculation process and the limited value of additional precision in most occupational settings prompt a focus on alternative methods to estimate AI.
     AI ranges from 0 to 1.0, expressing the proportion of a speech signal that is audible or “available to” a listener.  The portion of a speech signal that is available to a listener is that which contributes to the listener’s understanding of the message.  The relationship of AI to the proportion of signals correctly understood is not a 1:1 correlation, however.  As seen in Exhibit 7, an AI of 0.5 can yield comprehension rates at or near 100%, provided the signal content (vocabulary) is sufficiently limited or additional cues are provided.  The high rate of sentence comprehension is afforded by contextual clues inherent in extended messages, even when unfamiliar to the listener (i.e. first presentation).  Performance for all media sets shown in Exhibit 7 exceeds 50% comprehension by significant margins at AI = 0.5.
     The “overperformance” of speech comprehension, relative to AI, is attributed to the amazing powers of the human brain.  With knowledge of the language in use, the brain can extrapolate small portions of the message that were not received clearly.  This is not faultless, of course, or comprehension scores would consistently be 100%.  In casual conversation, where the consequences of misunderstanding are minimal, these extrapolations can lead to rather humorous exchanges.  In consequential communications, however, messages should be crafted such that any extrapolations necessary have a high probability of correctness.
     To give AI values intuitive meaning, a qualitative guideline is often used.  A typical example is as follows:
  • AI < 0.3:  generally unsatisfactory except in very limited circumstances, such as small vocabulary, highly-skilled listener, etc.
  • 0.3 < AI < 0.5:  generally acceptable performance.
  • 0.5 < AI < 0.7:  good; satisfactory performance.
  • AI > 0.7:  very good to excellent performance.
Users may choose to modify the qualitative assessment guideline to reflect the realities of specific applications.  For example, use of an unlimited vocabulary by unskilled listeners may shift acceptability up the AI scale.
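The qualitative guideline above is simple to encode for reporting purposes.  A sketch in Python (the function name and the treatment of boundary values are my own choices; the guideline itself leaves the boundaries ambiguous):

```python
def ai_rating(ai: float) -> str:
    """Map an Articulation Index value (0 to 1.0) to the typical
    qualitative guideline.  Boundary values are assigned to the
    higher category here, an arbitrary choice."""
    if not 0.0 <= ai <= 1.0:
        raise ValueError("AI must be between 0 and 1.0")
    if ai < 0.3:
        return "generally unsatisfactory"
    if ai < 0.5:
        return "generally acceptable"
    if ai < 0.7:
        return "good; satisfactory"
    return "very good to excellent"

print(ai_rating(0.45))  # generally acceptable
```

An organization modifying the guideline for its own applications, as suggested above, need only adjust the breakpoints.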
 
     Speech Interference Level (SIL) is less precise than an AI calculation; it is used to predict intelligibility of speech in face-to-face communications.  SIL is the maximum noise level in which a listener correctly understands 75% of phonetically balanced (PB) words or ~98% of sentences; this comprehension rate is equivalent to AI ≈ 0.5.  PB words are those included in a test set such that various speech sounds occur in the same proportion as “normal” speech.
     Mathematically, SIL is the arithmetic average of ambient SPLs in the octave bands 600 – 1200 Hz, 1200 – 2400 Hz, and 2400 – 4800 Hz.  SIL varies by the speaker’s vocal effort and distance from listener; several combinations of these variables are tabulated in Exhibit 8.
     Preferred Speech Interference Level (PSIL) is used to predict the likely level of difficulty using speech to communicate in various circumstances.  PSIL is the arithmetic average of ambient SPLs in the octave bands with center frequencies of 500, 1000, and 2000 Hz.  Exhibit 9 provides a graphical reference relating PSIL, distance between speaker and listener, and vocal effort to anticipated speech communication difficulty.  It also includes a convenient cross-reference to SIL, A- and C-weighted SPLs, and perceived noisiness values as alternative metrics.  Estimates of AI at each level of vocal effort are also tabulated, providing additional predictive insight.  The chart indicates where noise-reduction efforts may need to be focused or communication system upgrades implemented.
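Both SIL and PSIL are plain arithmetic averages of octave-band SPLs, so they are trivial to compute once band measurements are in hand.  A sketch (Python; the band readings are illustrative values, not drawn from Exhibit 8 or 9):

```python
from statistics import mean

def sil(spl_600_1200: float, spl_1200_2400: float, spl_2400_4800: float) -> float:
    """Speech Interference Level: arithmetic average of ambient SPLs in
    the 600-1200, 1200-2400, and 2400-4800 Hz octave bands."""
    return mean((spl_600_1200, spl_1200_2400, spl_2400_4800))

def psil(spl_500: float, spl_1000: float, spl_2000: float) -> float:
    """Preferred Speech Interference Level: arithmetic average of
    ambient SPLs in the octave bands centered at 500, 1000, and
    2000 Hz."""
    return mean((spl_500, spl_1000, spl_2000))

# Illustrative octave-band readings (dB SPL):
print(sil(68.0, 64.0, 60.0))   # 64.0
print(psil(70.0, 66.0, 62.0))  # 66.0
```

The resulting values are then compared against tabulated or charted references (Exhibits 8 and 9) for the relevant vocal effort and speaker-listener distance.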
     Speech Intelligibility Index (SII) is the most sophisticated index commonly available.  It has been adopted in the ANSI S3.5-1997 (R2020) standard, outlining four calculation methods.  The details of SII calculations will not be reproduced here; readers are referred to the ANSI standard, available software, and other resources for that information. 
     Interpretation of SII and AI values is comparable; both range from 0 to 1.0, though SII is often cited as a percentage.  Both indices are “outperformed” by speech comprehension over much of this range.  Using comparable test sets, SII and AI results are approximately equal.  For example, the ~98% comprehension rate of sentences at AI = 0.5 is duplicated at SII = 0.5.  This can be seen in Exhibit 10, as can the comprehension rates as a function of SII for other test sets.  As Exhibit 7 does for AI, Exhibit 10 shows that simpler vocabulary and the additional clues provided by sentences improve comprehension at lower index values.
     In lieu of intensive calculations, visual estimation procedures have been developed.  Killion and Mueller’s revised “count-the-dots” method incorporates research on the importance of frequencies outside the 500 – 4000 Hz range that is often the focus of speech communication studies.  It has also been adjusted to correlate with SII calculations (1/3 octave importance function) and is now titled “The SII-Based Method for Estimating the Articulation Index.”
     The procedure is as follows:
  • Perform audiometric tests using frequencies from ~200 Hz to 8000 Hz.
  • Plot the results on the audiogram form shown in Exhibit 11.
  • Count the number of dots below the threshold line; these are called the “audible dots.”
The number of audible dots approximates the AI of speech at 60 dB SPL.  For example, if 45 of the 100 dots on the audiogram lie below the threshold line, AI ≈ 0.45.  This method provides an indication of where communication difficulties may arise while being far simpler to execute than a precise AI or SII calculation.
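The counting step above can be sketched in code.  The dot placements below are hypothetical stand-ins; the published audiogram form has 100 importance-weighted dots, so only the counting logic is demonstrated here (Python):

```python
def count_the_dots_ai(dots: dict, thresholds: dict) -> float:
    """Rough 'count-the-dots' AI estimate: the fraction of dots lying
    below the plotted threshold line.  On an audiogram, dB HL increases
    downward, so a dot falls below the line (is audible) when its level
    meets or exceeds the measured threshold at that frequency.
    `dots` maps frequency (Hz) -> list of dot levels (dB HL);
    `thresholds` maps frequency -> measured hearing level (dB HL)."""
    audible = sum(
        1
        for freq, levels in dots.items()
        for level in levels
        if level >= thresholds[freq]
    )
    total = sum(len(levels) for levels in dots.values())
    return audible / total

# Hypothetical dot placements -- NOT the published 100-dot form:
dots = {500: [20, 30, 40, 50], 1000: [20, 30, 40, 50], 2000: [25, 35, 45, 55]}
thresholds = {500: 25, 1000: 35, 2000: 45}   # measured hearing levels (dB HL)
print(round(count_the_dots_ai(dots, thresholds), 2))  # 0.58
```

With the actual 100-dot form, the returned fraction directly approximates AI for speech at 60 dB SPL, as described above.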
     It should be clear by now that intensive calculations are often unwarranted overkill in occupational settings.  Variability is introduced by changes in personnel and daily operations; estimates may be the only data available in a reasonable timeframe.  AI and similar indices are typically used as indicators, where approximations and trends are more useful than precise values.  This in no way diminishes the importance of understanding the concept of intelligibility and how it influences communication system design; it is merely an acknowledgment that a more-practical approach is needed to accommodate resource limitations that exist in most workplaces.
 
Factors Related to Intelligibility
     The previous sections provided background information related to challenges involved in communicating in occupational soundscapes.  The presentation now turns to examples that connect these concepts to practical application in system design.
     The general “rule” for signal-to-noise ratio is higher is better; however, there are limitations.  In general, those with existing hearing loss (HL) require higher S/N to match the comprehension rates of those with normal hearing.  In “low-noise” situations, however, higher signal intensity may be unnecessary and can become annoying or otherwise detrimental.  For example, high-intensity sound induces distortion in the ear, decreasing intelligibility for all listeners. 
     SII and other indices were developed for normal hearing.  The influence of HL on intelligibility could vary greatly, depending on the nature and severity of hearing loss, the makeup and intensity of the soundscape, and characteristics of the speech signals.
     The vocabulary used in speech communication can have a profound impact on its effectiveness (see Exhibit 7 and Exhibit 10).  Variables that influence vocabulary effectiveness include the number of words in use (i.e. standardized or free-form), the number of syllables in each word, the uniqueness of words used (e.g. rhymes), and the context in which they are spoken.
     Similar words can be difficult to differentiate in random noise due to “consonant confusion.”  The “confusion tree” in Exhibit 12 shows the S/Ns at which various consonant sounds become indistinguishable.  Two adjacent lines indicate that, at S/Ns below their level of convergence, the corresponding consonant sounds are easily confused.  Filtering the speech signal alters the confusion tree; all components of a communication system must be considered in conjunction to achieve desired results.
     Dialects add an interesting variable to speech communications.  Imagine a meeting with one attendee from each of the following cities:  Boston, Houston, London, Dublin, Sydney, and Mumbai.  All are fluent in English, the native language of each.  Each speaks the language differently, however, stressing different speech sounds, pronouncing words differently, and defining words differently.  Add to this scenario high-intensity noise, poor reproduction of vocal inputs to an electronic communication device, and speakers of English as a second language (ESL) and the value of a limited, standardized vocabulary becomes self-evident.
     Communication at large gatherings can be difficult.  While “listening” to one voice, other voices in the vicinity create masking noise.  The effect on intelligibility is shown in Exhibit 13 for a voice of interest held constant at a level of 94 dB.  With one masking voice, “selective attention” facilitates relatively high comprehension rates – nearly 80% at S/N = 0 (vertical dashed line).  Additional voices degrade comprehension at significantly higher rates.  The data on masking voices provides empirical evidence of the productivity-crushing effects of sidebar conversations and unmoderated “debates” in meetings (see “Meetings:  Now Available in Productive Format!” [18Dec2019]).
     When a speaker must increase vocal effort to be heard above noise, intelligibility can suffer.  Increasing vocal effort to shouting levels (> ~80 dB) can result in 20% lower comprehension rates at constant S/N = 0.  At lower S/N, shouting degrades comprehension more rapidly despite starting at a lower baseline rate.  The decline in comprehension rates when low vocal effort (< ~50 dB) is used is essentially a mirror image.
     Acoustic properties of a room in which communication takes place can exacerbate other difficulties.  Reverberant properties can cause echoes or hamper the dissipation of sound energy required to “free” a listener’s auditory system to process a new signal.
     Face-to-face communication can enhance intelligibility relative to the same message recorded or transmitted electronically.  Vocal inflections are undistorted by reproduction and may aid comprehension of the message.  In addition, visual cues are readily available, such as facial expressions or “body language.”  The additional signals, in some cases, can convey more information than the message itself, particularly among highly-familiar or highly-skilled communicators.  The ability to see a speaker’s lips, even if the listener is not a skilled lip-reader, has been found to improve intelligibility significantly in negative-S/N conditions.
     Much of the research conducted on speech communication, hearing, and related topics has involved only men.  Differences between male and female speech and hearing are believed to be significant, but the details are not well-established.  This serves as yet another reminder that every environment is unique, requiring validation of systems within each.
 
     Vast amounts of research have been conducted on the influences of noise on communication, particularly speech communication.  Sharing details of this research yields diminishing returns as explorations become more peripheral or less practical to employ in an occupational setting.  The preceding presentation is akin to a high-speed flyover of the subject matter, highlighting only the most-relevant and practically-applicable information.  However, readers are encouraged to explore the literature on this interesting and valuable subject.  “The Third Degree,” meanwhile, will proceed to a presentation of recommendations for design of effective communication systems.

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications; 1996.
[Link] The Effects of Noise on Man.  Karl D. Kryter.  Academic Press; 1970.
[Link] Human Engineering Guide to Equipment Design (Revised Edition).  Harold P. Van Cott and Robert G. Kinkade (Eds).  American Institutes for Research; 1972.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc.; 2004.
[Link] Fundamentals of Industrial Ergonomics, 2ed.  B. Mustafa Pulat.  Waveland Press; 1997.
[Link] Engineering Noise Control – Theory and Practice, 4ed.  David A. Bies and Colin H. Hansen.  Taylor & Francis; 2009.
[Link] An Introduction to Acoustics.  Robert H. Randall.  Addison-Wesley; 1951.
[Link] “Protection and Enhancement of Hearing in Noise.”  John G. Casali and Samir N. Y. Gerges.  Reviews of Human Factors and Ergonomics; April 2006.
[Link] “On the Masking Pattern of a Simple Auditory Stimulus.”  James P. Egan and Harold W. Hake.  The Journal of the Acoustical Society of America; September 1950.
[Link] “Methods for the Calculation and Use of the Articulation Index.”  Karl D. Kryter.  The Journal of the Acoustical Society of America; November 1962 and Errata [Link].
[Link] “Pediatric Audiology:  A Review.”  Ryan B. Gregg, Lori S. Wiorek, and Joan C. Arvedson.  Pediatrics in Review, July 2004.
[Link] “Signal-to-noise ratio.”  Wikipedia.
[Link] “An Easy Method for Calculating the Articulation Index.”  H. Gustav Mueller and Mead C. Killion.  The Hearing Journal; September 1990.
[Link] “Twenty years later: A NEW Count-The-Dots method.”  Mead C. Killion and H. Gustav Mueller.  The Hearing Journal; January 2010.
[Link] “The Speech Intelligibility Index: What is it and what's it good for?”  Benjamin Hornsby.  The Hearing Journal; October 2004.
[Link] “SII Predictions of Aided Speech Recognition.”  Susan Scollie.  The Hearing Journal; September 2004.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 8:  Effects of Exposure]]>Wed, 24 Jan 2024 07:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-8-effects-of-exposure     Some effects of exposure to sound with certain characteristics have been mentioned in previous installments of this series.  Given the importance of understanding the potential consequences of failing to manage soundscapes effectively, compiling these in one place is advantageous.  The effects of exposure to challenging soundscapes provide the “why” that motivates efforts to manage them.
     In this installment, both auditory and extra-auditory effects are explored.  Auditory effects may be more intuitive, as direct impacts to hearing are highly relatable.  Extra-auditory effects, in contrast, often lack an obvious link between sound and the effects experienced by those exposed.  Recognizing this link is key to effective facility and workforce management.
     The term “auditory effect” is used herein to refer to physiological changes that directly impact an individual’s ability to perceive (“hear”), identify, differentiate, locate, or interpret various sounds.  These changes take place within the auditory system, anywhere from the outer ear to the brain.
     Extra-auditory effects are those that occur outside the auditory system; they can be physiological or psychological in nature.  The term “nonauditory effect” is also commonly used; the two are used interchangeably in this discussion.

Auditory Effects
     Several references to auditory effects have been made as the primary focus of this series.  However, this should not lead readers to conclude that it is only these that are important.  It is their relevance to previous discussions, prevalence in occupational settings, and relatability that encourage frequent mention.
     In this section, brief descriptions of the most common auditory effects are provided.  Those previously mentioned are included here to provide a single resource for this type of information.
Noise-Induced Hearing Loss (NIHL):  results from damage to the inner ear, specifically, “over-bending” the hair cells in the cochlea.  Trauma suffered by the tympanic membrane (i.e. “ruptured eardrum”) can also cause hearing loss.  NIHL refers to both temporary and permanent threshold shifts (TTS, PTS).  Refer to Part 2 for a presentation of the parts of the ear and Part 5 for a discussion of threshold shifts.  Experiencing a TTS is sometimes called “auditory fatigue,” but the term’s ambiguity limits its acceptance and value as a descriptor.  The greatest NIHL typically occurs in the first 10 – 15 years of exposure.  After this, its impact declines as other forms of hearing loss become more influential to overall auditory health.
Tinnitus:  ringing in the ear(s); the perception, usually, of a high-pitched sound that is not externally generated (i.e. not “received” by the “listener”).  Typically, tinnitus accompanies hearing loss and is often the warning sign that prompts individuals out of their complacency about sound exposures.
Hyperacusis:  extreme sensitivity to sound, often occurring in the aftermath of a traumatic “noise event.”  Hyperacusis can be difficult to diagnose and manage, as those that suffer from it often generate “normal” audiograms.  That is, the increased sensitivity to sound exposure is not detected by standard audiometric testing.
Recruitment:  narrowing of the audible range of SPLs that may accompany hearing loss.  A person with recruitment has a raised hearing threshold and increased sensitivity to higher-intensity sound.
Diplacusis:  asymmetrical hearing loss, typically caused by differential exposure.  For example, a machine operator whose task posture causes his/her left ear to “face” the noise source is susceptible to greater NIHL in the left ear than in the right.  As this condition worsens, localization of a sound source (see Part 6) becomes increasingly difficult.
Acoustic trauma:  sudden damage to the auditory system caused by an explosion or similarly-extreme release of energy in excess of 130 dBC.  Acoustic trauma is associated with transient sounds, whereas NIHL is associated with continuous and intermittent sounds (see Part 6 for descriptions of sound types).
Speech discrimination:  the ability to differentiate speech sounds is affected by total hearing loss and the nature of one’s soundscape.  Consonant sounds (e.g. “b” vs. “d;” see Part 5) are affected most, and background noise is particularly problematic.
     There are two types of auditory system damage to consider in relation to auditory effects:  mechanical injury and metabolic injury.  Mechanical injury correlates with peak pressure levels (SPLs); the most severe conditions exist with transient sounds.  Metabolic injury correlates with the duration of exposures and corresponding recovery periods.  Both types must be considered to effectively manage a soundscape.
     The preceding presentation is merely an overview of the most-commonly cited auditory effects of sound exposure.  Sound level measurements, exposure indices (e.g. 8-hr TWA), and frequency weighting (see Part 6) are used to quantify the potential for NIHL.  Individual sensitivities will always differ, however; this variability must not be ignored.
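     The arithmetic behind such exposure indices can be sketched briefly.  The following Python snippet is illustrative only; it assumes OSHA’s 90-dBA criterion level, 5-dB exchange rate, and dose-to-TWA conversion, which are one common convention rather than the only one in use:

```python
import math

def permissible_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    """Allowed duration (hours) at a given A-weighted level: T = 8 / 2^((L - 90) / 5)."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

def noise_dose(exposures):
    """Daily dose (%) from (level_dBA, hours) pairs: D = 100 * sum(C / T)."""
    return 100.0 * sum(hours / permissible_hours(level) for level, hours in exposures)

def twa_8hr(dose_percent):
    """8-hr TWA (dBA) from dose: TWA = 16.61 * log10(D / 100) + 90."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Hypothetical shift: 4 h at 95 dBA plus 4 h at 85 dBA
dose = noise_dose([(95.0, 4.0), (85.0, 4.0)])
print(round(dose, 1))           # 125.0 (% of allowable daily dose)
print(round(twa_8hr(dose), 1))  # 91.6 dBA
```

Under these assumptions, a dose above 100% (equivalently, a TWA above 90 dBA) indicates an overexposure warranting intervention; it does not, however, capture the individual variability noted above.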

Extra-Auditory (Nonauditory) Effects
     Previous references to extra-auditory effects have been made without clarifying this distinction; for example, annoyance was discussed in Part 7.  Annoyance is a subjective experience and a psychological response to sound that may occur at relatively low intensity levels.  This exemplifies an important characteristic of nonauditory effects of sound:  they are not correlated with auditory system damage and often occur at energy levels much lower than required to cause a threshold shift.
     Several physiological and psychological effects of sound exposure are introduced in this section.  Thorough analysis of these effects requires medical and/or psychological expertise far beyond the scope of this series.  Fortunately, for most practitioners, awareness and superficial knowledge of these effects suffice; detailed research is left to those with the interest and capacity to conduct it independently.
     Sound exposure has been identified as a significant stressor.  Readers are likely aware of health concerns related to excessive stress, such as hypertension and other cardiovascular conditions.  Sound exposure has also been linked to digestive issues, such as ulcers, and to sleep disturbance, which can lead to a further decline in health.
     A natural response to sound exposure – namely, shouting to communicate – can also cause an indirect health effect.  Throat pain, lesions, and hoarseness can result from exerting the vocal energy required to be heard in a noisy environment.
     Behavioral changes have also been linked to sound exposure.  Absenteeism and disciplinary action have been correlated to levels of sound exposure in workplaces.  Depression and social isolation are experienced within the work environment and outside it, particularly when substantial permanent threshold shifts (PTS) have occurred.  Cognitive decline and dementia can also be accelerated.
     Productivity, quality, and safety performance also suffer in noisy environments.  Increases in falls and other mishaps, including traffic accidents, have been correlated with sound exposure.  The combined effect of distraction by sound and difficulty hearing warning signals or other auditory feedback contribute to the occurrence of various types of accidents and errors.  Vigilance tasks, where changes in conditions are to be closely monitored, exhibit significant declines in performance in this type of environment.  Learning, typically used to improve performance on various metrics, can also be impaired by the soundscape.
     Although the nonauditory effects have not been explored in great detail, the presentation should, nonetheless, make clear that a multitude of negative impacts can result from an uncontrolled or poorly-managed soundscape.  The connections between sound exposure and nonauditory effects are not as intuitive as those to auditory effects and could, therefore, easily be overlooked.  Mere awareness of the potential to cause or contribute to these ailments could be the key to successful management.

Interactions and Synergies
     Sound exposures and NIHL do not occur independently of other conditions.  Other characteristics of the surrounding environment and of the exposed individual can amplify, accelerate, or otherwise modify the effects of sound exposure.  Individual sensitivity to any single factor, or combination of factors, is highly variable and extremely difficult to quantify or predict.  For this reason, awareness, rather than deep knowledge, remains the goal of this presentation.
     Environmental factors to be considered in conjunction with the soundscape include:
·     Ototoxic substances (solvents, heavy metals, etc.) affect the cochlea.
·     Neurotoxic substances affect the central nervous system.
·     Vibration.
·     Thermal conditions (see Thermal Work Environments series).
·     Verbal and nonverbal communication requirements.
     Individual factors to be considered in conjunction with the soundscape and environmental factors include:
·     All types of hearing loss (NIHL, presbycusis, sociocusis, etc.).
·     Overall health and fitness (cardiovascular health, for example, seems to exhibit a circular influence – that is, a decline in CV health increases sensitivity to sound exposures and sound exposures increase CV risk).
·     Illness, infection, or chronic disease (e.g. diabetes).
·     Diet and exercise.
·     Smoking or other tobacco use.
·     Alcohol consumption.
·     Use of OTC or other medications, drugs.
Specific concerns about any of these factors should be referred to medical professionals.  Professional medical advice may be needed to provide appropriate protections for individuals working in challenging soundscapes.

      Thus far, the focus has been on implications of a “noisy” environment.  The effects of a “quiet” environment should also be considered.  Very low levels of continuous sound could exacerbate startle reactions (see Part 7), as changes in sound are more noticeable.
      A very quiet environment can be relaxing to the point of drowsiness, increasing error rates and reducing productivity.  To offset this, some introduce random noise or music to the environment.  Either can be problematic and, in general, is not recommended.  If either is used, careful evaluation of the task, environment, communication requirements, and effect on others must be conducted to ensure that problems are not created unnecessarily.  Expending the analysis effort on other aspects of job design will typically yield more favorable results.  For example, adaptive workstations, such as those that accommodate both standing and seated postures, assignment rotations, or task modifications to reduce monotony may yield better results than the introduction of an artificial soundscape.

Summary and Conclusion
     There are many potential effects of sound exposure.  Some are physiological and quantifiable, while others are psychological and sometimes perplexing.  These effects are often interconnected, exhibiting complex relationships that inhibit thorough comprehension.  This in no way diminishes their applicability to workforce and facility management, as superficial knowledge is often sufficient to implement appropriate protections.
     Examples of the effects of sound exposure at various intensity levels are summarized in the concluding exhibits.  In Exhibit 1, effects on conversation and perception are presented, while Exhibit 2 presents additional physiological and psychological responses to sounds throughout the audible range.  The levels at which the onset of certain health effects typically occur are provided in Exhibit 3.  Finally, conditions affecting the relationship between signal presence and human perception of sound are explored in Exhibit 4.
     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).

References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Hearing Protection.”  Laborers-AGC Education and Training Fund; July 2000.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications; 1996.
[Link] The Effects of Noise on Man.  Karl D. Kryter.  Academic Press; 1970.
[Link] “Protection and Enhancement of Hearing in Noise.”  John G. Casali and Samir N. Y. Gerges.  Reviews of Human Factors and Ergonomics; April 2006.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc.; 2004.
[Link] Fundamentals of Industrial Ergonomics, 2ed.  B. Mustafa Pulat.  Waveland Press; 1997.
[Link] “Risk Observatory Thematic Report - Noise in Figures.”  Elke Schneider, Pascal Paoli, and Emmanuelle Brun.  Luxembourg Office for Official Publications of the European Communities; December 2005.
[Link] “Noise exposure as related to productivity, disciplinary actions, absenteeism, and accidents among textile workers.”  Madbuli H. Noweir.  Journal of Safety Research; Winter 1984.
[Link] “Extra-Auditory Effects of Noise as a Health Hazard.”  Joseph R. Anticaglia and Alexander Cohen.  American Industrial Hygiene Association Journal; May-June 1970.
[Link] “Noise Exposures:  Effects on Hearing and Prevention of Noise Induced Hearing Loss.”  Sally L. Lusk.  American Association of Occupational Health Nurses Journal; August 1997.
[Link] “Hazardous Exposure to Intermittent and Steady-State Noise.”  K.D. Kryter, W. Dixon Ward, James D. Miller, and Donald H. Elderedge.  Journal of the Acoustical Society of America; March 1966.
[Link] “Occupational Noise-Induced Hearing Loss.”  Raul Mirza, Bruce Kirchner, Robert A. Dobie, and James Crawford.  Journal of Occupational and Environmental Medicine; September 2018.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 7:  Perceptions]]>Wed, 10 Jan 2024 07:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-7-perceptions     The previous installment (Part 6) of the series dealt with objective measures of sound exposure.  Objective measurements, however, do not fully describe one’s experience of the surrounding soundscape.  The subjectivity of human perceptions of sound plays a vital role in effective noise control and communication system design.
     This installment of the “Occupational Soundscapes” series explores aspects of the human experience of sound that SPLs and TWAs alone do not explain.  These include the concepts of “loudness” and “noisiness” – terms that reflect the subjective nature of sound.
Loudness/Level
     “Loudness” is the term used to describe the subjective perception of a sound’s intensity or pressure.  Sometimes called a “loudness index,” loudness is quantified in sones, a linear scale defined as follows:
  • One sone is the loudness of a 1000-Hz pure tone at a sound pressure level (SPL) of 40 dB (the “reference tone”).
  • A sound of any frequency and SPL perceived to be of equal loudness to the reference tone is 1 sone.
  • A sound “twice as loud” as the reference tone is 2 sones, a sound three times as loud is 3 sones, and so on.
  • A sound that is “half as loud” as the reference tone is 0.5 sone, a sound one-quarter as loud is 0.25 sone, and so on.
  • The threshold of hearing at 1000 Hz is 0 sones.
     The loudness of a sound has a corresponding “loudness level.”  The loudness level is analogous to the power, intensity, and pressure levels introduced in Part 3, though it is not derived in the same way.  Loudness level is also defined for a reference tone in units of phons.  A sound’s loudness level, in phons, is numerically equal to the SPL, in decibels (dB) of a 1000-Hz tone of equal loudness.  The reference tone (1000 Hz, 40 dB) has a loudness level of 40 phons by definition.  The loudness level is conceived as a perceived or “equivalent” SPL, as it ascribes an equivalent experience to a subject sound and the reference tone.
     Contours of equal loudness and loudness level are shown in Exhibit 1 for pure tones; Exhibit 2 presents curves for equal loudness levels of octave bands.  Alternative representations, using 1/3 octave band curves, are shown in Exhibit 3 and Exhibit 4.  These may facilitate estimation of perceived loudness in some applications, depending on the data available.  Equations have been developed to convert between perceived loudness/level and SPL.  Such calculations will not be presented here, however, as their value is dubious; the precision of a calculated value can be misleading.  For example, citing an SPL can obscure the fact that it was “converted” from subjective assessments, implying that knowledge of the sound is more “concrete” than it really is.
     To further illustrate the potential for confusion, consider the inverse square law example presented in Part 3.  There, it was shown that doubling the distance from a sound source reduced the sound’s intensity by a factor of 4 and its intensity level by 6 dB.  This equates to a reduction in sound pressure by a factor of 2, achieving an SPL reduction of 6 dB.  However, this is not the same as the change in loudness level.  A change of approximately 10 dB is needed to perceive a change in loudness by a factor of 2, as shown by the relationship of sones to phons (i.e. an increase of 10 phons doubles the sones) depicted in Exhibits 1 – 4.
     The curves represent the average responses of research subjects.  Human subjectivity, as always, renders the curves approximations or estimates of individuals’ actual experience.  Nonetheless, these estimates are useful references for communication system and noise control designers.
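     The 10-phon doubling rule depicted in the exhibits corresponds to a simple formula, commonly attributed to Stevens and offered here as an illustrative sketch (valid roughly above 40 phons):  sones = 2^((phons − 40)/10).  In Python:

```python
def sones_from_phons(phons):
    """Loudness in sones doubles for every 10-phon increase above the
    40-phon reference (Stevens' rule; valid roughly above 40 phons)."""
    return 2.0 ** ((phons - 40.0) / 10.0)

print(sones_from_phons(40))  # 1.0 -- the reference tone
print(sones_from_phons(50))  # 2.0 -- 10 phons more is perceived as "twice as loud"
print(sones_from_phons(60))  # 4.0
```

This also illustrates why a 6-dB SPL reduction (halved sound pressure) is not perceived as “half as loud”; a reduction of roughly 10 dB is required.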
 
Noisiness/Annoyance
     The terms perceived noisiness and annoyance are often used interchangeably to describe the extent to which a sound is unwanted, unacceptable, or bothersome.  Analogous to the sone for loudness, noisiness has been assigned the unit of noy, where a 2-noy sound is twice as noisy as a 1-noy sound, a 3-noy sound is three times as noisy, and so on.  The perceived noise level (PNL) is “translated” into units of PNdB according to the following:  PNdB = 40 + 10 log2 (noy).  There is also a conversion scale on the right side of Exhibit 5, which provides equal-noisiness contours.  These curves were originally developed by assessing the sound of aircraft flyovers; as such, they may not be broadly applicable to occupational settings.  The caveats offered with regard to loudness are also applicable to noisiness.
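     The noy-to-PNdB conversion is a direct logarithmic mapping; a brief Python transcription of the stated formula:

```python
import math

def pndb_from_noys(noys):
    """Perceived noise level: PNdB = 40 + 10 * log2(total noys)."""
    return 40.0 + 10.0 * math.log2(noys)

print(pndb_from_noys(1))  # 40.0 -- the 1-noy reference
print(pndb_from_noys(2))  # 50.0 -- doubling noisiness adds 10 PNdB
print(pndb_from_noys(4))  # 60.0
```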
     The concept of noisiness and techniques for its assessment have been developed to an extent far beyond the scope of this series.  In lieu of mastering the nuances of “synthetic” measurement units, it may be of greater practical value to bear in mind the characteristics of sound that contribute to annoyance.  Five parameters have been identified as significant factors in noisiness assessments:
  • Spectral composition and levels – this relates to frequency weighting scales (e.g. dBA, dBC) and equal-noisiness curves.
  • Spectral complexity – this refers to concentrations of energy within a broadband sound spectrum.
  • Duration of exposure.
  • Time of increasing levels prior to reaching maximum (continuous sounds).
  • Level increase within a 0.5-second interval (impulsive sounds).
Awareness of these factors facilitates effective soundscape management, even if perceived noisiness (PNdB or other unit) numbers are unknown.
 
Frequency/Pitch
     The relationship between frequency and the perception of pitch, first mentioned in Part 2, has been formulated in a fashion similar to that between SPL and loudness.  Again, a reference tone of 1000 Hz at 40 dB SPL is used; it is assigned a value of 1000 mels.  A sound perceived to be twice the pitch of the reference tone is 2000 mels, and one-half the pitch of the reference tone is 500 mels.
     The sensitivity of human perception to changes in sound also varies with its frequency.  Curves showing the variation in sensitivity to changes in frequency are shown in Exhibit 6 and to changes in SPL in Exhibit 7.  High sensitivity to pressure and frequency changes in the 2 – 5 kHz range contribute to speech intelligibility.
     The Doppler Effect is a frequency-related phenomenon.  It is experienced when a sound source and listener are in relative motion (either or both can be moving).  As the distance between source and listener increases, the pitch of the sound decreases, and vice versa.  In this case, the frequency of the emitted sound does not change; it is the relative motion that causes a change in perception.
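     For the common case of a stationary listener and a moving source, the standard acoustics result (not derived in this series) gives the perceived frequency.  A minimal sketch, assuming a speed of sound of 343 m/s and a hypothetical 1000-Hz vehicle alarm:

```python
def doppler_frequency(f_source_hz, v_source_ms, speed_of_sound=343.0):
    """Perceived frequency for a stationary listener; v_source_ms is
    positive when the source approaches, negative when it recedes."""
    return f_source_hz * speed_of_sound / (speed_of_sound - v_source_ms)

print(round(doppler_frequency(1000.0, 5.0), 1))   # 1014.8 -- approaching: pitch rises
print(round(doppler_frequency(1000.0, -5.0), 1))  # 985.6 -- receding: pitch falls
```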
     Other frequency-related phenomena include perceived frequency shifts due to extended exposure or high intensity, consonance and dissonance, and beating.  Frequencies deemed “pleasant” in combination are said to be consonant, while those found objectionable are called dissonant.  This concept is applicable to annoyance and musical preferences, for example.
     Beating occurs when two sounds with similar frequencies are coincident, causing periodic reinforcement and cancellation.  The occurrence of beats can be used to identify mismatched operating speeds of equipment.  When two identical fans, for example, are synchronized, beating ceases.
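     The beat rate is simply the difference between the two frequencies, which is what makes it a convenient diagnostic.  A trivial sketch (the fan speeds are hypothetical):

```python
def beat_frequency(f1_hz, f2_hz):
    """Beats occur at the difference of the two frequencies: |f1 - f2|."""
    return abs(f1_hz - f2_hz)

# Two nominally identical fans running at 29.0 and 29.5 rev/s produce
# a 0.5-Hz beat; when their speeds are matched, the beat disappears.
print(beat_frequency(29.0, 29.5))  # 0.5
```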

Localization/Directionality
     A “binaural effect” allows humans to locate the source of a sound, or its directionality; this is called localization.  A sound may reach each ear at different times (“phase difference”) or at different intensity.  These differences are caused by the human body itself; this effect is known as the Head-Related Transfer Function (HRTF) among other names.  By analyzing the type and magnitude of perceived differences, the brain can determine the direction in which the source lies.

Other Perceptual Phenomena
     A sudden unexpected sound, particularly one of high intensity, can cause a startle reaction.  The startle itself may not be dangerous, but induced stress could cause uncontrolled movements or distraction that jeopardizes safety or task performance.  A sudden change in sound can produce a similar effect.
     High-intensity sounds (> 130 dB SPL), whether continuous, intermittent, or transient, can exceed the threshold of pain.  In addition to extreme discomfort and distraction in the short term, permanent hearing loss is likely to also occur.
     Another phenomenon of sound involves a lack of perception.  Ultrasonic and infrasonic “sounds” are those with frequencies above and below, respectively, the audible range.  Exposure to significant energy at these frequencies can cause discomfort or illness.  The inability to recognize the exposure often leaves victims baffled, though, in some cases, vibrational sensations may aid diagnosis of the discomfort.
     The concepts of masking and audibility vs. intelligibility are also perceptual in nature.  However, their relevance to communication warrants deferring detailed discussion to a future installment focused on the design of effective communication signals and systems.

In Conclusion
     The preceding discussion of auditory perception phenomena is merely a cursory introduction to the topic.  Expectations for direct application of the information presented are much lower than for other installments of the series.  The subjectivity of loudness, pitch, etc. requires substantial research to tailor a soundscape to a specific group of people.  Doing so proactively, while ideal, is beyond the capability of most commercial operations.
     Nonetheless, familiarity with perceptual variations is valuable to anyone that designs, maintains, or uses a communication system in an imperfect environment.  Even shallow knowledge of these concepts can aid in troubleshooting existing or potential performance issues.  With this knowledge, a focused project can be undertaken, requiring a manageable number of sound measurements and experiments to be conducted.

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).

References
[Link] Engineering Noise Control – Theory and Practice, 4ed.  David A. Bies and Colin H. Hansen.  Taylor & Francis; 2009.
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Hearing Protection.”  Laborers-AGC Education and Training Fund; July 2000.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] An Introduction to Acoustics.  Robert H. Randall.  Addison-Wesley; 1951.
[Link] The Effects of Noise on Man.  Karl D. Kryter.  Academic Press; 1970.
[Link] “Handbook for Acoustic Ecology.”  Barry Truax, Ed.  Cambridge Street Publishing, 1999.
[Link] “Protection and Enhancement of Hearing in Noise.”  John G. Casali and Samir N. Y. Gerges.  Reviews of Human Factors and Ergonomics; April 2006.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Year-End Reset]]>Wed, 27 Dec 2023 07:00:00 GMThttp://jaywinksolutions.com/thethirddegree/year-end-reset     Though spring is traditionally associated with cleaning and refreshing one’s surroundings, doing so during the year-end transition can provide significant advantages.  A range of possibilities exist in our physical, digital, and mental spaces to reduce clutter and stress while increasing value and productivity.
2023-12-27_year-end_reset.mp3
File Size: 9554 kb
File Type: mp3
Download File

Advantages
     While personal obligations may be plentiful due to holiday festivities, shopping, and other related activities, for many, the pace of our professional lives slows during this time.  Many non-retail businesses slow down, or even close, for a period at the end of the year.  This less-chaotic time provides opportunities to complete tasks that are often neglected during more-hectic periods.  The more frequently these tasks are performed, the easier they become.  A virtuous cycle is created as the tasks become less intrusive and, therefore, more likely to be accommodated in a busy schedule.
     A year-end reset consists of three activity types:  cleaning, organizing, and planning.  Performing each activity in all of our physical, digital, and mental spaces creates a healthier, more productive environment in which to work and live.  If the calendar year and fiscal year do not coincide, it may be appropriate to perform the reset activities at the end of the fiscal year; there is no “bad time” to do so.

Physical Spaces
     The physical spaces in which we live, work, and play often become cluttered and disorganized.  Spaces where these parts of our lives intersect are particularly susceptible.  Those that have adopted 5S at work often find the method useful at home, as well.  It is usually informal, lacking a documented standard, but application of the concepts is evident.  [See “Safety. And 5S.” (22Feb2023) for more information.]
     An entire home may benefit from a reset, but, typically, a few areas have greater impacts than the rest.  If working from home, the home office is an obvious target.  The state of one’s kitchen may play a large role; an organized kitchen can reduce the burden of preparing meals, lowering overall stress.  A tidy entryway can ensure that car keys and an umbrella are easily located, facilitating a smooth start to the morning commute.
     One’s office, workbench, toolbox, garden shed, garage, gym locker, or any other space can benefit from a reset.  Physical spaces that are not in fixed locations should also be reset occasionally, such as one’s car, wallet, briefcase, or handbag.  Anywhere that physical objects are used or stored is a candidate for a reset.

Digital Spaces
     The number of digital spaces we inhabit grows effortlessly.  If not careful, we can find ourselves on many email lists and signed in to several online services without even realizing it.  This accumulation, along with legitimate uses whose needs change over time, is why digital spaces should be reset.  Cleaning up one’s digital presence is also an important protection against various types of fraud.
     Digital spaces include any device with onboard memory, internet connectivity, WiFi, Bluetooth, or any other communication technology.  Also included are any online services, such as email, social media, and streaming services.  Applying 5S principles to digital spaces is highly recommended; additional steps are also advised, including:
  • Back up data to prevent loss or corruption.
  • Update software – operating system, apps, etc.
  • Activate device and account security measures – antivirus software, strong passwords, etc.
     Achieving “inbox zero” has become a popular objective; one way to do so is to organize messages in folders.  A digital space reset should include purging obsolete folders – archiving them in a project file, for example, or simply deleting those no longer needed.
     Mobile devices are prone to acquiring unique collections of clutter.  Unneeded photos and videos, obsolete text messages and voicemails, unsupported or unused apps, and other flotsam hinder device performance and user productivity.

Mental Spaces
     One’s mental spaces are the most complex and least influenced by others’ attempts to help.  Mental spaces house knowledge, thoughts, beliefs, fears and insecurities, desires and goals – the entire conscious and unconscious mind.  Suggested subjects for reflection include:
  • Lessons learned.  These need not be limited to professional endeavors; personal revelations and improved hobbyist techniques, for example, are also worthy of recognition.
  • Progress toward objectives.  Assess professional, personal, financial, and fitness goals, among others, periodically.
  • Self-satisfaction survey.  Survey your own satisfaction with various components of your life, without regard for the opinions of others.  Considering career, education, residence, lifestyle, etc. can clarify goals and provide motivation to make changes necessary to achieve them.
  • Relationships.  Perhaps the trickiest subject to broach, even in an internal dialogue, is the quality of one’s relationships and the impacts they have on one’s life.  Whether personal or professional, casual or close, relationships play a huge role in a person’s well-being.  If these seem out of balance, a decision may be needed to confront a lingering issue, end an unhealthy relationship, or deepen a desirable one.

     Some of the activities described overlap.  Execution of a year-end reset is quite personal, even in physical spaces; the highly-individualized nature of the activity renders an attempt to provide an exhaustive set of examples futile.  Therefore, many examples were eschewed in favor of encouraging readers to explore their own spaces with an open mind.  The hope is that a less-prescriptive treatment of the topic leads to better outcomes for individuals than could be predicted or illustrated by examples.

     To make suggestions for a more-effective reset, feel free to leave a comment below.  For additional guidance or assistance with Operations challenges, leave a comment, contact JayWink Solutions, or schedule an appointment.

Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 10:  The Search for a Universal Index]]>Wed, 13 Dec 2023 07:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-10-the-search-for-a-universal-index     Over the past century, many researchers have attempted to quantify the physiological impact of high- and low-temperature environments (see Part 4 and Part 8, respectively).  As knowledge of human biometeorology increased and computational tools became more powerful, the models used became much more sophisticated.  However, models are typically focused on one type of environment – hot or cold – requiring use of multiple indices to accommodate varying conditions.  Other shortcomings in thermal index formulations further limit their utility in highly-variable conditions.
     In this installment of the “Thermal Work Environments” series, indices presented earlier in the series are evaluated and compared.  Additional indices are also considered for recommendation in workplaces.  Finally, a universal index, valid across the foreseeable range of human environmental exposure, is presented.
Classification and Evaluation of Thermal Indices
     The classification and evaluation schemes summarized here are the work of C. R. de Freitas and E. A. Grigorieva (2015, 2017).  de Freitas and Grigorieva scoured scientific literature to catalog more than 160 thermal indices.  Characteristics of each index were used to sort them into eight categories, or classes, identified by letters A through H.  The following descriptions are used to differentiate the classes:
  A:  single-parameter index;
  B:  index based on algebraic or statistical model;
  C:  proxy thermal strain index;
  D:  proxy thermal stress index;
  E:  energy balance strain index;
  F:  energy balance stress index;
  G:  special-purpose index;
  H:  simulation device for integrated measurement.

     Each index was then scored on a 5-point scale for six evaluation criteria.  The criteria and scoring methods are summarized below.
     Comprehensiveness indicates the number of relevant variables accounted for in the index.  Any factor contributing to thermal stress or strain is a relevant variable, including air temperature, humidity, wind speed, insolation or other radiation exposure, metabolic rate, clothing and protective gear in use, etc.  Comprehensiveness is scored as follows:
   Each relevant variable = +1.
   Maximum score = 5.
     Scope indicates the range of conditions for which the index is valid.  Scope is scored as follows:
   A narrow range of conditions is covered = 1.
   A broad range of conditions is covered = 3.
   Both cool and cold conditions are covered = 4.
   Both hot and cold conditions are covered = 5.
     Sophistication indicates the theoretical soundness or empirical support of the index.  Sophistication is scored according to the classification scheme, described above, as follows:
   Classes A and B = 2.
   Class C = 3.
   Classes D and E = 4.
   Classes F and G = 5.
   Class H is scored according to the methods used (i.e. classes A to G, above).
     Transparency indicates the clarity and justification of the rationale underpinning the index.  Transparency is scored as follows:
   None of the terms used are justified = 0.
   Justifications for terms used are poor or weak = 1 or 2.
   Terms used are justified in most cases = 3.
   Terms used are justified = 5.
     Usability indicates the ease with which an index can be implemented and interpreted.  Evaluation is based on the presence or absence of three characteristics:
  1. Computational procedures are straightforward.
  2. Only ‘standard’ data are required.
  3. Outputs are easily interpreted.
Usability is scored as follows:
   The index exhibits none of the three characteristics = 0.
   The index exhibits only one of the three characteristics = 1.
   The index exhibits two of the three characteristics = 3.
   The index exhibits all three of the characteristics = 5.
     Validity indicates the degree to which the index value accurately reflects the physiological impact, or human experience, of environmental conditions.  Validity is scored as follows:
   The index has not been validated = 0.
   A rational index that has not been validated = 2.
   The index has been compared to one fully-validated index = 3.
   The index has been compared to multiple fully-validated indices = 4.
   The index is derived from or tested with empirical data = 5.
     The six evaluation criteria are deemed to be of equal importance and are equally weighted in the final score.  The total score for each index, therefore, is simply the sum of the six criteria scores.  Using the de Freitas and Grigorieva scheme, scores can range from 4 to 30; indices with higher scores are expected to be more useful.
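The arithmetic is trivial, but a small sketch can make the scheme concrete.  The following Python fragment totals the six criteria; the criterion scores shown are hypothetical, not taken from any exhibit:

```python
# Criteria of the de Freitas & Grigorieva scheme; each index receives
# one score per criterion, and the total is a simple unweighted sum.
CRITERIA = ("comprehensiveness", "scope", "sophistication",
            "transparency", "usability", "validity")

def total_score(scores):
    """Sum the six equally-weighted criterion scores (possible range: 4 - 30)."""
    missing = set(CRITERIA) - set(scores)
    if missing:
        raise ValueError(f"missing criteria: {sorted(missing)}")
    return sum(scores[c] for c in CRITERIA)

# Hypothetical index, for illustration only:
example = {"comprehensiveness": 3, "scope": 5, "sophistication": 4,
           "transparency": 3, "usability": 5, "validity": 4}
print(total_score(example))  # 24
```

Because the criteria are equally weighted, no normalization is needed; disagreements about individual criterion scores, discussed below, propagate directly into the total.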
 
Heat and Cold Indices Revisited
     The merits of various indices used in hot (Part 4) and cold (Part 8) environments have been discussed.  The de Freitas and Grigorieva framework provides a consistent method of evaluation and comparison.  The summary table in Exhibit 1 presents the scores for heat and cold indices previously discussed.
     All of the heat indices listed in the summary table are direct indices, as discussed in Part 4.  However, identifying the “best” index for use in a hot environment is not as simple as finding the highest total score.  Although the scoring method weights each criterion equally, specific circumstances may influence practitioners’ preferences.  Also, scores may be disputed or obsolete, as the following examples demonstrate.
     Heat Index is given a comprehensiveness score of 5, though it could be argued that holding constant all but temperature (Tdb) and humidity (RH) reduces this to 2.  Updating its usability score of 3 may also be warranted.  De Freitas and Grigorieva (2017) state that “it is reasonable to argue that once user-friendly routines are made available (on a website, say) to run the calculations…, the usability would achieve a top score” in reference to another index.  This logic also applies to Heat Index, for which online calculators have long been available.
     Accepting both arguments changes the Heat Index score from 23 to 22.  While not a large change in total score, maintaining second rank on this list, the constituent scores give a different impression of this index.  Similarity of Heat Index and humidex may also inspire questions about the discrepancies in scores on these two criteria.
     Thermal Work Limit (TWL) provides another cautionary example.  It attained the highest score of the heat indices examined, scoring the maximum on four of the six criteria.  However, its narrow scope of applicability [36 – 40° C (96.8 - 104° F)] and data requirements (e.g. body dimensions) may preclude its use in some situations.
     Ranked third on our list by total score (20), Wet Bulb Globe Temperature (WBGT) is worthy of continued attention.  It is the basis on which ACGIH has set threshold limit values (TLVs) for heat stress.  The availability of instrumentation, simple calculations and estimation procedures seems to warrant a usability score of 5; instead, it has been scored a 3.  It should also be noted that WBGT’s comprehensiveness score of 3 applies to outdoor environments.  For indoor environments, dry bulb temperature (Tdb) is omitted, reducing the score to 2.
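To make the indoor/outdoor distinction concrete, the commonly-used WBGT weighted averages can be sketched in Python.  The function names are mine, and the weightings should be verified against the governing standard (e.g. ISO 7243) before use:

```python
def wbgt_outdoor(t_nwb, t_g, t_db):
    """WBGT with solar load: 0.7 x natural wet bulb + 0.2 x globe + 0.1 x dry bulb (degrees C)."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb, t_g):
    """WBGT without solar load: the dry bulb term is dropped."""
    return 0.7 * t_nwb + 0.3 * t_g

print(round(wbgt_outdoor(25.0, 40.0, 32.0), 2))  # 28.7
print(round(wbgt_indoor(25.0, 40.0), 2))         # 29.5
```

The dropped dry bulb term is the reason the comprehensiveness score falls from 3 to 2 for indoor environments.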

     Among the cold indices examined, The New Improved Wind Chill Index (Twc) and required clothing insulation (IREQ) tied with a high score of 26.  The tie-breaker, in occupational settings and other practical applications, is ease of use.  The simplicity of using Twc (usability = 5), whether calculated or estimated from a table, outweighs its deficiency in comprehensiveness by making consistent use more likely.
     The examples provided are not intended to be exhaustive or the conclusions to be absolute.  Reasonable people can disagree on individual scores or the method of scoring, particularly when reviewing a list as extensive as that compiled by de Freitas and Grigorieva.  It is important to recognize this potential for disagreement, understand its implications, and move past it to implement appropriate tools for one’s own situation.
 
Highest-Scoring Indices by Class
     To explore additional candidates for use in occupational settings, a list of the highest-scoring indices in each classification was compiled.  A summary table is provided in Exhibit 2, listing indices that tied for high score in each class.  This subset was then subjected to analysis, similar to that described above, to identify indices with sufficient potential to supplant a familiar index and thus warrant in-depth investigation.  The analysis is admittedly superficial; the large number of indices necessitates an abbreviated treatment.  A description of this review, further condensed, follows.
     The leading index in Class A is Thermo-Integrator, despite only scoring 18 points.  Though fully validated (validity score = 5), its other scores are uninspiring; further research was eschewed.
     Class B maxed out at 14 points, where air temperature (Ta) and wet bulb temperature (Twb) tied.  Both are components of WBGT; use of either independently would be injudicious.
     The standout in Class C is Resultant Temperature (RT) or Net Effective Temperature (NET) with a total score of 23.  Its maximum scores for scope and usability make it an attractive option for consideration.  Unfortunately, an English-language version of the defining paper could not be found, ending the inquiry.
     Two indices tied for the top spot in Class D, with 24 points each.  The Index of Physiological Effect (IPhysE or Ep) assesses the level of strain on a points scale, while the Predicted Four-hour Sweat Rate (P4SR) uses liters of sweat produced as index values.  The impracticality of utilizing such indices in occupational settings was discussed in Part 4.
     Class E is topped by two indices with total scores of 25.  The defining paper for Classification of Weather in Moments (CWM) could not be found in English.  The description provided by de Freitas and Grigorieva of the output of this index is “weather types,” dampening enthusiasm for a continued search.  Effective Temperature (ET) is not valid in cold temperatures.
     There is a three-way tie, at 28, at the top of Class F.  Unfortunately, the three leading indices are impractical for use in occupational settings.  Body-atmosphere Energy Exchange Index (BIODEX) requires core temperature monitoring, while Skin Temperature Energy Balance Index (STEBIDEX) requires skin temperature monitoring.  The Subjective Temperature Index (STI) causes concern with its name alone.  It also requires “nonstandard” data and the maximum valid temperature is 40° C (104° F), which excludes settings that are in great need of effective monitoring and controls.
     Atop Class G is a four-way tie at 28 points.  Of the four high-scorers, only the Standard Effective Temperature for Outdoors (OUT_SET*) outputs an equivalent temperature, making it the most-easily understood index.  Its use of “nonstandard” data reduces its usability score and, thus, its practicality for use in occupational settings.  Its focus on outdoor environmental conditions also limits its applicability.
     Bucking the upward trend in high scores, Class H falls back to 26 with a tie between two indices with questionable value in our chosen context.  The Acclimatization Thermal Strain Index (ATSI) is focused on the physiological adaptations required for travel; investigation of the Bioclimatic Distance Index (BDI) was thwarted by another language barrier.  Maximum valid temperatures for both indices are also rather low.  A lack of viable candidates in the special-purpose category is not surprising.

     The search for a thermal index to solve all our problems, based on high scores in the de Freitas and Grigorieva framework, has been rife with disappointment.  The context of use in occupational settings, in all potential permutations, has heightened the challenge.  Changing focus slightly reveals a new subset of indices; these candidates are reviewed next.

Other Candidate Indices
     The widespread familiarity and accessibility of Heat Index (HI) and Wind Chill Temperature (Twc) charts and calculators, and the resultant effectiveness they have achieved, set a high bar for any potential replacement.  As shown in the previous section, a high total score in the de Freitas and Grigorieva scheme is an ineffective metric for identifying potential replacements for the HI/Twc duo.  Instead, focus must be narrowed to the characteristics that determine the feasibility of an index in the context of concern – any occupational setting.  In this context, two scoring categories stand out.
     To be considered universal, an index must be valid throughout the range of conditions that may be encountered.  In the de Freitas and Grigorieva scheme, this equates to a scope score of 5.  A single index should eliminate the gap in valid temperatures between Twc [-40 – 10° C (-40 – 50° F)] and HI [20 – 60° C (68 – 140° F)].
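A simple selector makes the coverage gap explicit.  The sketch below uses the valid ranges cited above; the function name and boundary handling are illustrative only:

```python
def applicable_index(t_db_c):
    """Return which familiar index covers a given air temperature (degrees C),
    using the cited valid ranges: Twc -40..10 C, HI 20..60 C."""
    if -40.0 <= t_db_c <= 10.0:
        return "Wind Chill Temperature (Twc)"
    if 20.0 <= t_db_c <= 60.0:
        return "Heat Index (HI)"
    return "neither - gap in coverage"

print(applicable_index(-5.0))  # Wind Chill Temperature (Twc)
print(applicable_index(15.0))  # neither - gap in coverage
print(applicable_index(30.0))  # Heat Index (HI)
```

The 10 – 20° C (50 – 68° F) band returned as “neither” is precisely the gap a universal index must close.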
     To ensure consistent, reliable application in occupational settings, an index must be simple to implement.  Here, this equates to a usability score of 5.
     Sorting the list according to these criteria yields the set of indices shown in the summary table in Exhibit 3 (previously-reviewed indices are excluded).  These indices were subjected to analysis similar to that described above; a brief review follows.
     The first index listed, Effective Temperature (ETM), includes only two variables in its calculation.  Other indices, including the HI/Twc duo, are often criticized for a lack of comprehensiveness; it would be difficult to argue that popularizing a “new” index with the same shortcoming is worthwhile.
     The next seven indices on the list scored 0 for validity.  Without empirical support, none of these indices is likely to gain traction as a potential replacement for HI or Twc.  Without some level of validation, use of these indices is also too risky when worker well-being is at stake.
     The Class G indices in this list show promise, though most score low on validity; additional work is needed for them to be viable alternatives.  The exception is the Universal Thermal Climate Index (UTCI), which warrants a closer look.  An overview of UTCI is provided below.

The Universal Thermal Climate Index
     Development of the Universal Thermal Climate Index (UTCI) was initiated by the International Society of Biometeorology (ISB) and the European Union COST (Cooperation in Science and Technology) program’s Action 730.  Objectives of the index development project, which involved scientists from 23 countries, included:
  • An index based on advanced thermophysiological models, incorporating all modes of heat transfer between the human body and the surrounding environment.
  • An index capable of predicting whole-body effects (e.g. heat stroke) and local effects (e.g. frostbite).
  • An index valid for all conditions and all “time and spatial scales.”
  • An “apparent temperature” index, where the output is the air temperature of a reference environment that causes the same physiological response as the input conditions.
  • An index that requires minimal computational capacity, allowing rapid, widespread application.
      A visualization of the UTCI calculation procedure is provided in Exhibit 4.  The generic mathematical formulation of the calculation is as follows:
UTCI = f (Ta, Tr, va, pa) = Ta + Offset (Ta, Tr, va, pa)
where Ta is air temperature, Tr is radiant temperature, va is wind speed, and pa is vapor pressure (humidity).  Mean radiant temperature (Tmrt or MRT) is typically used for Tr and can be calculated as follows:
Tmrt = [(Tg + 273)^4 + (1.1 × 10^8 × va^0.6) / (ε × D^0.4) × (Tg – Ta)]^(1/4) – 273
where Tg is globe temperature (°C), ε is emissivity of the globe, and D is the diameter of the globe (m).  For a standard globe, such as that typically used for WBGT measurements, where D = 0.150 m and ε = 0.95,
Tmrt ≈ [(Tg + 273)^4 + 2.5 × 10^8 × va^0.6 × (Tg – Ta)]^(1/4) – 273
This is an approximation of a more-rigorous calculation accounting for direct, indirect, and reflected solar radiation and infrared radiation from the sky and surroundings.
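The globe-thermometer calculation of mean radiant temperature can be sketched in Python.  This is the commonly-cited forced-convection approximation (the form given in ISO 7726); the function name and default arguments are illustrative:

```python
def mean_radiant_temp(t_g, t_a, v_a, epsilon=0.95, d=0.150):
    """Mean radiant temperature (degrees C) from globe temperature.
    t_g: globe temperature (C); t_a: air temperature (C);
    v_a: air speed (m/s); epsilon, d: globe emissivity and diameter (m).
    With the standard-globe defaults, the coefficient reduces to roughly 2.5e8."""
    coeff = 1.1e8 * v_a ** 0.6 / (epsilon * d ** 0.4)
    return ((t_g + 273.0) ** 4 + coeff * (t_g - t_a)) ** 0.25 - 273.0

# When the globe reads warmer than the air, the true radiant temperature
# exceeds the globe reading (the air has been cooling the globe):
print(round(mean_radiant_temp(40.0, 30.0, 1.0), 1))
```

In still air with the globe and air at the same temperature, the function simply returns the globe temperature, as expected.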
     Meteorological data typically include wind speeds measured at a height of 10 m (33 ft).  Heat balance models often use a height of 1.1 m (3.6 ft) as the average height for human exposure to wind; other heights are needed in the detailed calculations of the clothing model (e.g. head, upper leg, etc.), discussed below.  Meteorological wind speeds can be converted to an appropriate height for specific calculations according to the following:
va = vZr × log (Z / Z0) / log (Zr / Z0)
where va is wind speed (m/s) at height Z (m), vZr is wind speed (m/s) at height Zr (m) (reference height for meteorological measurements), and Z0 is the “roughness length” at ground level, often assumed to be 0.01 m (0.4 in), representing short grass or a street.
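This logarithmic wind-profile conversion is straightforward to implement.  A brief sketch, with the reference height and roughness length defaulted to the values mentioned above:

```python
import math

def wind_at_height(v_zr, z, zr=10.0, z0=0.01):
    """Convert wind speed v_zr (m/s), measured at reference height zr (m),
    to the wind speed at height z (m), given roughness length z0 (m)."""
    return v_zr * math.log(z / z0) / math.log(zr / z0)

# 5 m/s measured at 10 m, converted to the 1.1 m exposure height:
print(round(wind_at_height(5.0, 1.1), 2))  # 3.4
```

As expected, wind speed near the ground is substantially lower than at the meteorological reference height, so using unconverted data would overstate convective heat loss.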
     In addition to the environmental variables, UTCI also incorporates the metabolic rate of heat production and a detailed clothing model.  The clothing model is used to determine whole-body and local insulation values for the head, torso, lower arms, hands, upper and lower legs, and feet.  The vapor resistance of garments and air layers are also determined in the model.  Calculators available for practical application of UTCI do not require direct input of clothing characteristics; the model incorporates typical clothing appropriate for the environment.
     An online calculator is provided at utci.org with a simple interface.  If an offline option is desired, an executable file (“source code”) can also be downloaded from the site.  Repeated calculations in the program’s DOS interface can be tedious, however.
     For more information on the UTCI calculation process, there is a poster, also available on utci.org, that summarizes the operational procedure.  See Exhibit 5 for a preview of the UTCI summary poster.
     Like Heat Index and Wind Chill Index, an output of the UTCI model is a color-coded chart of thermal stress.  A simplified version appears in the visualization in Exhibit 4 and the summary poster in Exhibit 5.  The chart in Exhibit 6 provides additional details of typical physiological responses corresponding to various index temperatures within each stress category.  The “thermal comfort zone” is shown as the upper portion of the “no thermal stress” category, while there is no “slight heat stress” category defined.  The remaining categories – moderate, strong, very strong, and extreme – are mirrored for heat and cold stress.
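For spreadsheet or software implementations, the stress categories can be encoded as a simple lookup.  The boundary values below follow the commonly-published UTCI assessment scale; treat them as assumptions and verify them against Exhibit 6 before relying on them:

```python
# Upper bound (degrees C UTCI) of each category, coldest to hottest.
UTCI_CATEGORIES = [
    (-40.0, "extreme cold stress"),
    (-27.0, "very strong cold stress"),
    (-13.0, "strong cold stress"),
    (0.0,   "moderate cold stress"),
    (9.0,   "slight cold stress"),
    (26.0,  "no thermal stress"),
    (32.0,  "moderate heat stress"),
    (38.0,  "strong heat stress"),
    (46.0,  "very strong heat stress"),
]

def utci_category(utci_c):
    """Map a UTCI equivalent temperature (degrees C) to its stress category."""
    for upper_bound, label in UTCI_CATEGORIES:
        if utci_c <= upper_bound:
            return label
    return "extreme heat stress"

print(utci_category(20.0))   # no thermal stress
print(utci_category(-30.0))  # very strong cold stress
```

Note the asymmetry discussed above: a “slight cold stress” band exists, but there is no corresponding “slight heat stress” band.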
     Returning to the de Freitas and Grigorieva scoring scheme, UTCI received a total score of 27, scoring the maximum for comprehensiveness, scope, sophistication, and transparency.  In this case, the scope score (first filtering criterion) was certainly deserved, with a cited valid range of -90 – 60° C (-130 – 140° F).
     The second filtering criterion, usability, scored only 3.  However, the rationale for increasing this to 5, acknowledging the availability of simple tools and calculators, once again applies.  This is the reason for its inclusion in the list despite its nominally deficient score.
     In the final category, validity, UTCI scored 4, reflecting the development team’s work comparing UTCI to several other indices.  This brings the revised total score to 29, placing UTCI alone atop the index-scoring hierarchy.  Scores alone, however, will not propel any index to a position of prominence in meteorological or industrial hygiene domains.  If it is to unseat the incumbent HI/Twc duo, there is much for UTCI’s advocates to accomplish.  Continued development of the index and its underlying models and assumptions is important.  Perhaps more difficult will be the education and persuasion of the general public and practitioners in several fields, cultures, and languages, whose motivations and capacities for change differ greatly.

     For additional guidance or assistance with management of thermal environments, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).

References
[Link] “A comprehensive catalogue and classification of human thermal climate indices.”  C.R. de Freitas and E.A. Grigorieva.  International Journal of Biometeorology; January 2015.
[Link] “A comparison and appraisal of a comprehensive range of human thermal climate indices.”  C.R. de Freitas and E.A. Grigorieva.  International Journal of Biometeorology; March 2017.
[Link] “The Perceived Temperature:  The Method of the Deutscher Wetterdienst for the Assessment of Cold Stress and Heat Load for the Human Body.”  G. Jendritzky, et al. International Society of Biometeorology; 2000.
[Link] “A Universal Scale of Apparent Temperature.”  Robert G. Steadman.  Journal of Applied Meteorology and Climatology; December 1984.
[Link] “The Acclimatization Thermal Strain Index (ATSI): A preliminary study of the methodology applied to climatic conditions of the Russian Far East.”  C.R. de Freitas and E.A. Grigorieva.  International Journal of Biometeorology; March 2009.
[Link] “New Indices to Assess Thermal Risks Outdoors.”  Krzysztof Blazejczyk.  Environmental Ergonomics XI, Proceedings of the 11th International Conference; May 2005.
[Link] “An Outdoor Thermal Comfort Index (OUT_SET*) - Part I – The Model and its Assumptions.”  Richard de Dear and J. Pickup.  Proceedings of the 15th International Congress of Biometeorology and International Conference on Urban Climatology; January 1999.
[Link] "An Outdoor Thermal Comfort Index (OUT_SET*) - Part II – Applications."  Richard de Dear and J. Pickup.  Proceedings of the 15th International Congress of Biometeorology and International Conference on Urban Climatology; January 1999.
[Link] “Thermal Indices and Thermophysiological Modeling for Heat Stress.”  George Havenith and Dusan Fiala.  Comprehensive Physiology; January 2016.
[Link] “Threshold Limit Values for Chemical Substances and Physical Agents.”  American Conference of Governmental Industrial Hygienists (ACGIH); latest edition.
[Link] “UTCI - Universal Thermal Climate Index.”
[Link] “UTCI - why another thermal index?”  Gerd Jendritzky, Richard de Dear, and George Havenith.  International Journal of Biometeorology; December 21, 2011.
[Link] “The Universal Thermal Climate Index UTCI in operational use.”  Peter Bröde, Gerd Jendritzky, Dusan Fiala, and George Havenith.  Proceedings of the Conference ‘Adapting to Change: New Thinking on Comfort,’ Cumberland Lodge; April 2010.
[Link] “Deriving the operational procedure for the Universal Thermal Climate Index (UTCI).”  Peter Bröde, et al.   International Journal of Biometeorology; May 2012.
[Link] “The UTCI-clothing model.”  George Havenith, et al.  International Journal of Biometeorology; May 2012.
[Link] “The Universal Thermal Climate Index UTCI Compared to Ergonomics Standards for Assessing the Thermal Environment.”  Peter Bröde, et al.  Industrial Health; February 2013.
[Link] “Mean radiant temperature.”  Wikipedia.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 9:  Managing Conditions in Cold Environments]]>Wed, 29 Nov 2023 07:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-9-managing-conditions-in-cold-environments     Throughout the range of possible workplace temperatures, safeguarding the health and well-being of employees is paramount.  Despite equal importance, the development of a coordinated program to prevent cold injury receives much less attention than its heat-related counterpart.
     An effective cold injury prevention program consists of the same components as a heat illness prevention program.  These include the measures used in environmental assessment, exposure limits, policies and procedures, training plans, program assessment processes, and other information relevant to work in a cold environment.  Like its heat-related counterpart, this is nominally a prevention program; however, information about the proper response to the occurrence of cold injury, such as first aid practices, is also included.
     Given the similar natures of the heat- and cold-related programs, it should come as no surprise that this installment of the “Thermal Work Environments” series parallels that of “Part 5:  Managing Conditions in Hot Environments.”  In the outline for a cold injury prevention program that emerges, cold stress hygiene and various control mechanisms are introduced.  This outline can be customized to the specific needs of an organization or workplace.
     The content of a cold injury prevention program is presented in five (5) sections:
  • Training
  • Hazard Assessment
  • Controls
  • Monitoring
  • Response Plans
To reiterate, the information presented here is only an overview.  An exhaustive treatment is not feasible in this format, given the range of potential workplace scenarios that exists.  Instead, it is intended to identify avenues of inquiry to be explored in the context relevant to developers of a specific program.

Training
     Every person who works in or has responsibility for a cold workplace should be trained on the ramifications of overexposure to cold conditions.  An effective training program includes information discussed in the following four sections.  Topics important to all team members include:
  • basics of human biometeorology and heat balance,
  • environmental, personal, and behavioral risk factors,
  • methods used to monitor conditions,
  • controls in place to prevent cold injury,
  • signs and symptoms of cold injury, and
  • first aid and emergency response procedures.
Training of supervisors and team leaders should emphasize proper use of controls, signs and symptoms, and appropriate responses to cold injury.
     A complete training plan includes the content of the training and a schedule for delivery.  It may be best to distribute a large amount of information among multiple modules rather than share it in a single, long presentation.  Refresher courses of reduced duration and intensity should also be planned to combat complacency and to update information as needed.  Refreshers are particularly helpful when dangerous conditions exist intermittently or are seasonal.
 
Hazard Assessment
     An initial hazard assessment consists of identifying the elements of job design (see Part 1) that are cold-related.  These include site-specific environmental factors, such as:
  • atmospheric conditions (e.g. temperature, humidity, sun exposure),
  • air movement (natural or forced), and
  • precipitation (i.e. rain or snow that can wet clothing or exposed skin).
Job-specific attributes are also documented.  These may include:
  • intensity of work (i.e. strenuousness and rate),
  • dexterity requirements of work,
  • tools and materials to be handled,
  • personal protective equipment (PPE) and other gear required, and
  • access to food, water, and a warm recovery area.
As many relevant factors as possible should be identified.  Special attention must be paid to compounding risks.  For example, a task requiring high dexterity in the presence of fluids compounds the risk of exposed skin (gloves limit dexterity) with the risk of wetted skin and clothing.
     The information collected in the hazard assessment is used to create a risk profile for each task or logically-grouped series of tasks.  Development of controls and modifications of job design are prioritized according to the risk profiles generated.
 
Controls
     Readers are reminded that there is a hierarchy of controls that can be implemented to address a hazard.  The hierarchy is represented by an inverted pyramid, as shown in Exhibit 1.  The most-effective types of controls – elimination and substitution – are not realistic options in many thermal environments.  Repair of a bridge or other structure cannot be postponed until spring, and a construction site cannot be relocated to a more hospitable environment.
     Therefore, the focus of this presentation is, necessarily, on the remainder of the hierarchy which represents feasible protection opportunities.  Engineering controls modify the tasks performed, equipment used, or the operating environment.  Administrative controls reduce cold injury by guiding workers’ behavior.  Finally, PPE is used to manage cold stress not eliminated by other measures.
     A comprehensive cold injury prevention program considers every term in the heat balance equation (see Part 6), developing appropriate mitigations for each.  Examples of engineering controls include:
  • Heaters of various types – area heaters warm the entire workspace; spot or infrared (IR) heaters warm the worker without significantly heating the surroundings; chemical heat packs and electrically-heated garments provide a heat source that moves with the person.
  • Wind barriers reduce the wind chill effect.
  • Tool covers minimize conductive heat loss (K) by covering metals with lower-conductivity material, such as plastic or rubber.
  • Splash/spray guards limit contact with fluids, minimizing evaporative heat loss (E) and limiting loss of insulation capability of clothing.
  • Equipment designed with large buttons, knobs, dials, etc. facilitates use while wearing gloves or mittens.
  • Use of alternative control devices, such as a stylus for a touchscreen, avoids removal of protective gear for routine tasks.
  • Use of lift assists or other physical aids maintains work rate with minimum risk of sweating (i.e. stabilizes metabolic heat generation, M).
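The engineering controls above can be related to the heat balance equation in code.  As a minimal sketch, assuming a simplified sign convention (gains positive, losses negative) rather than the exact formulation presented in Part 6:

```python
def net_heat_storage(M, C, R, K, E, W=0.0):
    """Net heat storage S (W/m^2) under an assumed sign convention.

    M: metabolic heat generation, W: external work, C: convection,
    R: radiation, K: conduction (signed; losses negative),
    E: evaporative loss (magnitude, always subtracted).
    Controls target individual terms: tool covers reduce conductive
    loss (K), splash guards limit evaporative loss (E), and lift
    assists stabilize metabolic generation (M).
    """
    return M - W + C + R + K - E

# A worker holding S near zero maintains thermal balance; persistently
# negative S signals net cooling and rising cold-injury risk.
print(net_heat_storage(M=200, C=-80, R=-30, K=-40, E=50))  # → 0
```

A monitoring program aims to keep this sum near zero without relying on sweating (large E) in cold conditions.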
Administrative control examples include:
  • Implementing a “buddy system” allows two (or more) people to effectively manage their work rates (minimize sweating, etc.) by sharing work and providing each other with constant monitoring for signs of cold injury.
  • Developing a balanced work cycle avoids periods of intense effort followed by periods of very low intensity.  Long periods of sitting or standing are also avoided.
  • Implementing an acclimatization program provides an adjustment period to new or returning workers.  While there is consensus that heat acclimatization is effective in preventing many heat illnesses, the same does not appear to be true for cold acclimatization.  It is recommended, nonetheless, because evidence suggests that cold acclimatization does, in fact, occur to some degree, improving safety.
  • Implementing a work/warm-up schedule limits the duration of exposure.  The schedule shown in Exhibit 2 has been cited in Canadian provincial regulations and as ACGIH TLVs.  Exhibit 3 provides a visual representation of a 4-hour work cycle in the no-wind condition.  The supplement shown in Exhibit 4 combines the two styles of representation in a single reference.  All work/warm-up schedules assume that workers’ clothing is dry.
  • Encouraging consumption of snacks and warm drinks throughout the workday helps workers maintain energy and hydration.
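The logic of a work/warm-up schedule can be sketched as a simple decision function.  The cut-offs below are hypothetical placeholders, not the values from Exhibit 2 or the ACGIH TLVs; any real schedule must use the published figures and, as noted above, assumes workers’ clothing is dry.

```python
def work_warmup_recommendation(air_temp_c: float, wind_kmh: float) -> str:
    """Coarse work/warm-up recommendation for a 4-hour work cycle.

    All thresholds are hypothetical, for illustration only; consult
    the ACGIH TLV schedule for authoritative values.
    """
    if air_temp_c > -26:            # hypothetical "normal breaks" boundary
        return "normal breaks"
    if air_temp_c <= -43 or (air_temp_c <= -38 and wind_kmh >= 16):
        return "cease non-emergency work"
    if wind_kmh >= 16:              # strong wind compounds the cold
        return "shorten work periods and add warm-up breaks"
    return "standard work/warm-up schedule"

print(work_warmup_recommendation(-20, 10))  # → normal breaks
print(work_warmup_recommendation(-45, 0))   # → cease non-emergency work
```

Encoding the schedule this way makes the dry-clothing assumption and the compounding effect of wind explicit, and simplifies supervisor training.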
     Cold-related PPE includes all garments worn for insulation, wind protection, or waterproofing.  This includes insulated boots, goggles, gloves, hats, neck gaiters or scarves, balaclavas, etc.
     Clothing should be worn in loose-fitting layers.  Multiple layers provide improved insulation performance and facilitate adjustment as needs change.  The innermost layers should be capable of wicking moisture away from the skin.  Garments made of synthetic fibers are often used for this purpose; cotton is not recommended, as it wets easily, reducing its insulating capability.
     A waterproof outermost layer or windbreaker may be needed, depending on conditions.  Adjustable closures (waist, neck, arms) and vents (armpits, etc.) allow the wearer to accommodate a wider range of conditions before changing is necessary.  Intermediate layers must be selected in accordance with environmental conditions and work performed (e.g. work rate or intensity).
     It is important to remember that clothing that is wet, dirty, or compressed loses insulation capacity.  This is particularly important in regard to socks.  Thick, insulated socks may be counterproductive if their bulk causes boots to be tight-fitting.  Restricted circulation and impaired insulation accelerate the onset of trench foot and/or frostbite.

     Controls used in conjunction must be evaluated to ensure suitability of the combination.  For example, a well-balanced work cycle may render heated garments excessive and, therefore, counterproductive.  Likewise, using a physical aid to move a light load may cause a problem; effort is reduced, lowering M, while heat loss (K) in the extremities may be increased via physical contact with the equipment.  Another example is the construction of an effective wind barrier that, in addition to its direct benefit, precludes the need for a splash guard that hinders task performance.  Sometimes less is more.
 
Monitoring
     Monitoring is a multifaceted activity and responsibility.  In addition to measuring environmental variables, the effectiveness of controls and the well-being of workers must be continually assessed.  A monitoring plan includes descriptions of the methods used to accomplish each.
     Measurement of environmental variables is the subject of Part 8 of this series.  As discussed in that installment, decisions regarding work cycle modifications or stoppages are often based on wind chill calculations.  Though imperfect, the wind chill index is a useful guide that provides early warning that additional precautions may be needed to protect workers during particularly dangerous periods.  In addition to providing wind chill charts, the National Weather Service (NWS) issues advisories when dangerous conditions are forecast.
     After controls are implemented, they must be monitored for proper use and continued effectiveness.  This should be done on an ongoing basis, though a formal report may be issued only at specified intervals (e.g. quarterly) or during specific events (e.g. modification of a control).  Verification test procedures should be included in the monitoring plan to maintain consistency of tests and efficacy of controls.
     Monitoring the well-being of workers is a responsibility shared by a worker’s team and medical professionals.  Prior to working in a cold environment, each worker should be evaluated on his/her overall health and underlying risk factors for cold injury.  An established baseline facilitates monitoring a worker’s condition over time, including the effectiveness of acclimatization procedures and behavioral changes.
     Suggestions for behavioral changes, or “lifestyle choices,” can be made to reduce a worker’s risk; these include diet, exercise, consumption of alcohol or other substances, and other activities.  Recommendations to an employer regarding one’s fitness for certain duties, for example, must be made in such a way that protects both safety and privacy.  Cold-related issues may be best addressed as one component of a holistic wellness program such as those established by partnerships between employers, insurers, and healthcare providers.
 
Response Plans
     There are three (3) response plans that should be included in a cold injury prevention program.  Like heat-related response plans (see Part 5), two of them are concerned with cold injury that was not prevented.
     The first response plan details the provisioning of first aid and subsequent medical care when needed.  Refer to Part 7 for an introduction to cold injuries and first aid.
     The second outlines the investigation required when a serious cold injury or cold-related accident occurs.  The questions it must answer include:
  • Were defined controls functioning and in proper use?
  • Had the individual(s) involved received medical screening and been cleared for work?
  • Had recommendations from prescreens been followed by individual(s) and the organization?
  • Had the individual(s) been properly acclimatized?
  • Were special circumstances involved (e.g. wind chill advisory, emergency situation, etc.)?
The investigation is intended to reveal necessary modifications to the program to prevent future cold injuries and related accidents.
     The final response plan needed defines the review process for the cold injury prevention program.  This includes the review frequency, events that trigger additional scrutiny and revision, and required approvals.


     Currently, management of cold work environments is governed by the “General Duty Clause” of the Occupational Safety and Health Act of 1970.  The General Duty Clause provides umbrella protections for hazards that are not explicitly detailed elsewhere in the regulations.  It is a generic statement of intent that provides no specific guidance for assessment of hazards or management of risks.
     Though OSHA has issued an “advance notice of proposed rulemaking” (ANPRM) to formalize heat-related safety regulations and launched a National Emphasis Program for heat-related hazards, no counterpart for cold conditions has yet been publicized, nor is cold stress addressed in the OSHA Technical Manual.  In fact, the major players in US industrial hygiene (OSHA, NIOSH, ACGIH) do not prescribe a cold injury prevention program.
     That a standard promulgated by OSHA or other prominent organization will reduce illness and injury in thermal work environments is a reasonable expectation.  However, it must be recognized that it, too, is imperfect.  No standard or guideline can account for every person’s unique experience of his/her environment; therefore, an individual’s perceptions and expressions of his/her condition (i.e. comfort and well-being) should not be ignored.  A culture of autonomy, or “self-determination,” where workers are self-paced, or retain other responsibility for thermal stress hygiene, is one of the most powerful tools available for safety and health management.
 
 
     For additional guidance or assistance with complying with OSHA regulations, developing a cold injury prevention program, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “Threshold Limit Values for Chemical Substances and Physical Agents.”  American Conference of Governmental Industrial Hygienists (ACGIH); latest edition.
[Link] “Thermal Environment.”  Student Manual, Occupational Hygiene Training Association; February 2016.
[Link] “Fire and Ice: Protecting Workers in Extreme Temperatures.”  Donald J. Garvey.  Professional Safety; September 2017.
[Link] “Hierarchy of Controls.”  NIOSH; January 17, 2023.
[Link] “Preventing Cold-related Illness, Injury & Death among Workers.”  NIOSH Publication No. 2019-113; September 2019.
[Link] “Cold Environments - Working in the Cold.”  Canadian Centre for Occupational Health and Safety (CCOHS); June 13, 2023.
[Link] “Working in Cold Conditions Fact sheet.”  Canadian Centre for Occupational Health and Safety (CCOHS); October 28, 2022.
[Link] “Staying safe in the cold - & tips for safety pros and workers.”  Barry Bottino.  Safety + Health; January 29, 2023.
[Link] “Working in the cold – Stay safe when temperatures drop.”  Alan Ferguson.  Safety + Health; November 22, 2020.
[Link] “Cold Stress and its Safety Measures.”  OSHA Outreach Courses; July 30, 2021.
[Link] “Recommendations to Improve Employee Thermal Comfort When Working in 40°F Refrigerated Cold Rooms.”  Diana Ceballos, Kenneth Mead, and Jessica Ramsey.  Journal of Occupational and Environmental Hygiene; August 17, 2015.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 8:  A Measure of Comfort in Cold Environments]]>Wed, 15 Nov 2023 07:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-8-a-measure-of-comfort-in-cold-environments     Development of effective cold stress indices has garnered significantly less attention than that of heat stress indices (see Part 4).  Perhaps this is explained, at least in part, by the lesser threat to life posed by cold stress, as explained in Part 7.  Whatever the reason, this difference does not indicate lesser importance.  Cold stress and cold injuries are serious conditions that affect workers in many ways and have both short- and long-term consequences.  Monitoring environmental conditions and worker well-being is as critical a responsibility in cold environments as it is in hot ones.
      This installment of the “Thermal Work Environments” series parallels the discussion in Part 4, beginning with a widely-reported, if not widely-understood, index used in weather forecasting, followed by a discussion of application in industrial settings.  Readers are encouraged to review the discussions of heat and cold indices in conjunction.
Popular Meteorology
     Several independent weather-forecasting organizations have developed versions of “feels like” temperature indices to convey the level of discomfort one can expect to experience in cold, windy conditions.  Others simply defer to the National Weather Service (NWS) in the US or the Meteorological Service of Canada (MSC), using the “New Improved Wind Chill Index” developed by an international consortium.  This is a logical choice, as these national agencies are generally recognized as the experts in weather-related matters in North America.  However, outdated references may still be encountered in readily-available resources, such as websites and journal articles.
     The likelihood of encountering obsolete material warrants a brief review of the history of the development of wind chill as a comfort index.  Understanding input variables and calculation methods makes the variety of wind chill indices one may encounter more meaningful.
     The concept of wind chill, as we know it today, originated in the Antarctic in 1945.  Two explorers, Paul Siple and Charles Passel, measured the time required for a container of water to freeze at various temperatures and wind speeds.  From the data gathered, Siple and Passel derived a formula for the rate of heat loss in cold, windy conditions:

H = (10√v – v + 10.45) (33 – T)

where H is “wind chill” or rate of heat loss (kcal/m^2/hr), v is wind speed (m/s) and T is air temperature (°C).
     This heat loss rate has little meaning outside the research community; therefore, conversion to a recognizable form is needed for practical use.  This seminal work’s greatest contribution has been to inspire development of better indices.
     To this end, further experiments were conducted, ultimately resulting in a revised formula for wind chill:
where WCI is the Wind Chill Index (W/m^2), v is wind speed (m/s), and T is ambient temperature (°C).  WCI is converted to an apparent temperature with the following relation:
where Tch is the equivalent chilling temperature (°C) and WCI is the Wind Chill Index (W/m^2) calculated above.
 
     In 1992, NWS published a wind chill index that came into wide use and public familiarity.  This formulation calculates an apparent temperature, in a single step, according to the following:
where Twc is the apparent or “wind chill temperature” (°F), T is ambient temperature (°F), and v is wind speed (mph).  Published wind chill tables are convenient; their widespread use makes them the most-likely references to this obsolete formulation to be encountered.
     In 1998, Robert Quayle and Robert Steadman advocated for the Steadman Wind Chill to replace the existing index.  Deficiencies of the NWS wind chill index cited by Quayle and Steadman include:
  • Data from water-freezing experiments are not representative of human physiology or behavior.
  • Wind speeds below 5 mph are assumed to have no cooling effect and may even have a warming effect.
  • Wind speeds above 40 mph are assumed to contribute no additional cooling effect.
  • The resultant wind chill values are lower than apparent temperatures actually experienced by humans in given conditions.
     To compensate for these deficiencies, the Steadman Wind Chill equation was developed:
where TSF is the Steadman wind chill equivalent temperature (°F), v is wind speed (mph), and T is ambient temperature (°F).  An alternate formulation, for use with metric units, was also developed:
where TSC is the Steadman wind chill equivalent temperature (°C), v is wind speed (m/s), and T is ambient temperature (°C).
 
     The broader meteorological community was clearly aware of deficiencies in the existing wind chill index when, in 2000, an international consortium convened to update it.  This effort resulted in “The New Improved Wind Chill Index.”  The updated index was adopted by NWS and MSC in 2001 and continues to be cited in meteorological reports and forecasts across North America.  In the US:

Twc = 35.74 + 0.6215 T – 35.75 v^0.16 + 0.4275 T v^0.16

where Twc is the wind chill equivalent temperature (°F), T is ambient temperature (°F), and v is wind speed (mph).  In Canada:

Twc = 13.12 + 0.6215 T – 11.37 v^0.16 + 0.3965 T v^0.16

where Twc is the wind chill equivalent temperature (°C), T is ambient temperature (°C), and v is wind speed (km/hr or kph).
     Though NWS and MSC did not simply adopt the Steadman equations for their improved indices, results are in much-greater alignment with Steadman’s than those of previous iterations.  A comparison of wind chill equivalent temperature calculations is shown in Exhibit 1 for two hypothetical conditions.  As conditions become extreme (i.e. very low temperature and high wind speed), discrepancies among the wind chill equivalent temperature calculations become more pronounced.  The modern indices better reflect human physiology – the basis of Steadman’s arguments in the 1990s.  Readers are encouraged to compare these values to those obtained by interpolating from wind chill index charts.
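As a minimal sketch, the 2001 NWS and MSC formulations can be implemented directly; the coefficients below are the published “New Improved Wind Chill Index” equations, with the stated validity ranges noted only in comments.

```python
def wind_chill_us(temp_f: float, wind_mph: float) -> float:
    """NWS (2001) wind chill equivalent temperature (deg F).
    Intended for temp_f <= 50 and wind_mph >= 3."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

def wind_chill_metric(temp_c: float, wind_kmh: float) -> float:
    """MSC (2001) wind chill equivalent temperature (deg C).
    Intended for temp_c <= 10 and wind_kmh >= 4.8."""
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c - 11.37 * v + 0.3965 * temp_c * v

print(round(wind_chill_us(0, 15)))        # → -19
print(round(wind_chill_metric(-10, 20)))  # → -18
```

The rounded results match the published wind chill charts; for example, 0° F with a 15 mph wind yields a wind chill of −19° F.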
     A wind chill index table remains the most-convenient resource, as precision is often unnecessary; the variability of human experience typically exceeds the error inherent in interpolation of tabulated values.  NWS, MSC, and other national and independent organizations publish such tables.  Going a step further, the supplement to this post, shown in Exhibit 2, provides a single reference for use with either US or metric units.  Converted values are also included in the supplement tables to facilitate approximation of wind chill values when available measurements are in mixed units.

Industrial Application

     Wind chill equivalent temperatures are very useful for outdoor settings; however, significant shortcomings render them much less helpful in most indoor settings.  Wind chill and heat index (see Part 4) are subject to a similar criticism:  only two factors are included in calculations.
     Wind chill calculations neglect the influence of radiation, though exposure to direct sunlight can increase the apparent temperature by 8 – 15° F (5 – 9° C).  This can be accounted for with a wet bulb globe temperature (WBGT) measurement in hot conditions (see Part 4).  However, most sources also conclude that humidity is not relevant to cold stress.  This leaves only the dry bulb (ambient) temperature with no adjustment.
     The ambient temperature alone may be sufficient for many indoor workplaces, where the air is calm (i.e. no appreciable air movement).  In others, such as large freezer facilities, where significant air flows are necessary, the wind chill index provides a better assessment of conditions.
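This selection logic can be sketched as a small function.  Treating the wind chill formula’s calm limit (4.8 km/h) as the “calm air” cut-off is an assumption for illustration; below it, the function simply falls back to the dry bulb reading.

```python
def indoor_cold_index_c(dry_bulb_c: float, air_speed_ms: float) -> float:
    """Return the ambient temperature for calm air, or the MSC wind
    chill equivalent (deg C) when airflow is significant, as in large
    freezer facilities with circulation fans."""
    v_kmh = air_speed_ms * 3.6
    if v_kmh < 4.8:               # effectively calm: dry bulb suffices
        return dry_bulb_c
    f = v_kmh ** 0.16
    return 13.12 + 0.6215 * dry_bulb_c - 11.37 * f + 0.3965 * dry_bulb_c * f

print(indoor_cold_index_c(-18, 0.5))         # calm freezer → -18
print(round(indoor_cold_index_c(-18, 3.0)))  # with fans → -25
```

The same −18° C room feels substantially colder under fan-driven airflow, which is why the wind chill index is the better basis for controls in such facilities.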
 
     The required clothing insulation (IREQ) index incorporates aspects of human physiology to determine proper clothing for the existing work conditions.  Two versions of the index have been defined:
  • IREQmin establishes the clothing insulation required to limit body temperature reduction to 97° F (36° C).  This is the maximum heat loss deemed safe for workers in cold conditions.
  • IREQneutral defines the clothing insulation required to maintain normal body temperature.  This is associated with comfort, as only minimal cooling occurs.
     Clothing outside the range from IREQmin to IREQneutral requires additional attention.  Below IREQmin, work must be time-limited to prevent excessive heat loss.  Above IREQneutral, a worker becomes subject to the risks associated with clothing dampened by sweat (see Part 7).
     Determining IREQ values requires complex calculations involving the heat balance equation (see Part 6).  Therefore, these indices provide more value as conceptualizations of conditions than in direct application.  The chart in Exhibit 3 visually reinforces the idea behind IREQ, though the index values remain abstract.  Instituting the IREQ indices fully requires data collection and computation that are impractical in most occupational settings and, therefore, beyond the scope of this treatise.
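While computing IREQ itself is impractical here, interpreting the two bounds is straightforward.  In this sketch, the IREQ values (in clo) are taken as given inputs and the function name is illustrative.

```python
def clothing_adequacy(icl_clo: float, ireq_min: float, ireq_neutral: float) -> str:
    """Classify a clothing ensemble's insulation (clo) against the
    IREQ bounds: below IREQmin, exposure must be time-limited; above
    IREQneutral, sweat-dampened clothing becomes a risk."""
    if icl_clo < ireq_min:
        return "below IREQmin: limit exposure time"
    if icl_clo > ireq_neutral:
        return "above IREQneutral: risk of sweating"
    return "within IREQmin-IREQneutral range"

print(clothing_adequacy(2.0, 1.6, 2.4))  # → within IREQmin-IREQneutral range
```

Even without computed IREQ values, this framing is a useful mental model when selecting intermediate clothing layers for a given work rate.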

The Conclusion

     While several practical options exist to assess potential heat stress (see Part 4), cold stress options are much more limited.  In fact, the simplicity required for practical application in occupational settings renders all but wind chill infeasible.  Some choice does remain, however.  Wind chill should be cited in the scale (°F or °C) most familiar to the group affected.  This may define the equation used or table referenced to determine the index.  Though a “conservative” index may invite criticism for overprotection, an extra margin of safety reduces the risk inherent in the variability of human experience.  Ultimately, it is individual experience, as subjective and unreliable as it may be, that must be the determining factor in many workplace decisions.
 

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “Thermal Environment.”  Student Manual, Occupational Hygiene Training Association; February 2016.
[Link] “Understanding Wind Chill.”  University of Kentucky Weather Center.
[Link] “The Ridiculous History of Wind Chill.”  Rachel Z. Arndt.  Popular Mechanics; December 12, 2016.
[Link] “The Steadman Wind Chill:  An Improvement over Present Scales.”  Robert G. Quayle and Robert G. Steadman.  Weather and Forecasting; December 1, 1998.
[Link] “The New Improved Wind Chill Index.”  National Weather Service; November 1, 2001.
[Link] “Wind chill – text version.”  Government of Canada; June 14, 2022.
[Link] “Wind Chill Calculator (Celsius).”  CalcuNation; 2022.
[Link] “Calculate the wind chill.”  Lenntech.
[Link] “Wind chill.”  Wikipedia; July 13, 2023.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 7:  Cold Injury and Other Cold-Related Effects]]>Wed, 01 Nov 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-7-cold-injury-and-other-cold-related-effects     Loss of heat balance in a cold environment leads to cold injury, an umbrella term for several afflictions, of varying severity, resulting from overexposure to low temperatures.  Recognizing symptoms of cold injuries is critical to timely treatment and successful recovery.
     This installment is a companion to Part 3 (“Heat Illness”) of the “Thermal Work Environments” series, in which a range of cold-related effects and injuries are presented.  The objective of this discussion is to raise awareness of the risks of working in cold environments and the severity of potential outcomes.  These are serious conditions, all but the mildest of which require medical attention from trained healthcare professionals.
Cold Injury
     The following descriptions of cold injuries include information to aid in their identification and understanding of proper treatment.  Minor issues can often be resolved by the affected individual or nearby coworkers.  Though information is shared regarding identification and treatment of cold injuries and related effects, it should not be construed as “medical advice.”  As the severity of injury increases, the more critical professional medical care becomes to survival and recovery.
     The ensuing presentation begins with the least-severe issue and progresses to the most-severe at its conclusion.  The intervening sequence, however, is not a reliable reflection of the relative severity of all occurrences of injury; individuals’ circumstances, sensitivities, and, therefore, experience of injuries differ.  Also, the absence of recognizable symptoms of “low-level” injury does not preclude the onset of a serious condition.  All signs of cold stress must be acknowledged and treated accordingly.
Cold Discomfort
     Discomfort is not an injury, but is, nonetheless, worth noting at the outset.  It is often the first warning or reminder that cold stress is an important element of the workplace that requires attention and proper management.  Discomfort is a common precursor to more-serious injury.
Chilblains
     A chilblain is a swelling of a foot or hand; ears and cheeks may exhibit a similar condition.  It is characterized by redness, itching, and pain.  Bare skin can develop chilblains with repeated exposure to temperatures below 60° F (16° C).  Permanent damage increases susceptibility to recurrence of redness and itching upon subsequent exposure.  Keeping affected areas warm and dry is the default treatment.
Frozen Cornea
     The combination of low temperature, strong wind, and the absence of eye protection (i.e. goggles) can result in frozen cornea.  Treatment usually consists of warming the closed eye with one’s hand or a warm compress, followed by 24 – 48 hours of complete coverage with an eye patch.
Trench Foot
     Trench foot is caused by exposure to cold, wet conditions.  Wet feet suffer from accelerated heat loss, with increased risk of trench foot in temperatures as high as 59° F (15° C).  The hypothalamus responds to the increased heat loss by restricting circulation to the feet, causing them to become cold and numb.  As the condition progresses, hot, shooting pain may be experienced, with swelling, redness, and blisters appearing.
     Tissue damage caused by reduced circulation becomes permanent after approximately 6 hours of exposure with vasoconstriction.  Tissue is damaged further by walking, as it is soft and weak.  After 24 hours of exposure with vasoconstriction, amputation may be necessary.
     Treatment of trench foot is limited to gentle warming and drying and slight elevation of the feet.  Use of over-the-counter pain medication and bed rest (i.e. no walking) are common during the recovery period.
     To prevent trench foot, waterproof insulated boots that are not constricting (tight fitting) should be worn.  Socks should be changed when they become damp and the inside of the boots should be dried regularly (e.g. overnight).
     This condition is also called “immersion foot.”  A similar condition can develop in the hands; the same cause, treatment, and prevention principles are applicable.  Generalizing, this type of injury can be called cold-immersion injury.
Frostnip
     Freezing of the top layers of skin tissue is called frostnip; it is most common in the cheeks, earlobes, fingers, and toes.  It is characterized by numbness; a white, waxy appearance; and a hard, rubbery feel of the skin while the tissue underneath remains soft.  Frostnip is usually reversible with gentle warming; rubbing affected areas should be avoided, as this can damage the frozen tissue.
Frostbite
     Freezing that extends through all layers of skin is called superficial frostbite; deep frostbite includes freezing underlying tissue, such as muscle, and can extend into bone.  The extremities – fingers, toes, nose, ears, etc. – are most susceptible to frostbite.  Skin in frostbitten areas is white, with a “wooden” feel, and may develop a bluish hue.  Numbness and stinging are also common symptoms.
     Superficial frostbite is treated similarly to frostnip.  Ice crystals that form in the skin make the tissue susceptible to damage; rubbing or other stress on affected tissue must be avoided.  Treatment of deep frostbite introduces additional risks and is best left to medical professionals whenever possible.  Areas of deep frostbite should not be warmed until the victim is safe from potential refreezing.  Refreezing of frostbitten areas can result in damage and loss of tissue in excess of that caused by the initial frostbite.
     Warming is accomplished in a water bath maintained at 105 – 110° F (41 – 43° C).  Dry heat can cause burns and should not be used.  When thawing is complete, the water bath is discontinued and affected areas are wrapped in gauze, separated (i.e. fingers, toes), and immobilized.  Attempting to use rewarmed body parts can cause further damage.
Hypothermia
     A core temperature below one’s normal (diurnal) range is termed hypothermia.  Hypothermia advances from mild to moderate to severe as core temperature drops.  The temperature ranges that characterize each level of severity differ among sources; there is greater agreement on the progression of symptoms.  A summary of this progression and one possible temperature range breakdown are shown in Exhibit 1.
     One possible element of a “field diagnosis” of hypothermia is “the –umbles.”  If observations of a person’s behavior include “stumbles, mumbles, fumbles, and grumbles,” a closer look for other signs of hypothermia is warranted.  This assumes, of course, that these observations represent a deviation from the person’s normal behavior.  Severity of the –umbles typically correlates with that of the hypothermia of which it is symptomatic.
     Significant physiological changes can occur during the earliest stages of hypothermia.  Mild hypothermia can cause vasoconstriction (limiting circulation to the extremities), loss of fine motor skills (“fumbles”), and shivering.  Onset can occur in ambient temperatures as high as 50° F (10° C).
     Fine motor skills continue to degrade in moderate hypothermia; tasks such as zipping a coat can become very difficult or impossible.  Shivering intensifies and can become uncontrollable.  Slurred speech (“mumbles”) and irrational behavior (“grumbles”) also manifest at this stage.
     When hypothermia becomes severe, shivering becomes intermittent, then ceases.  The person loses his/her ability to walk (“stumbles”) and becomes stiff.  Pulse and respiration rates decline and the person loses consciousness; cardiac arrest may be induced.  Severe hypothermia brings a person to the brink of death; emergency medical care is critical to survival.

     Recovery from mild hypothermia is fairly straightforward.  Increasing physical activity increases the metabolic rate of heat generation, offsetting heat loss.  Moving to a warm shelter, removing wet clothing and replacing with additional dry layers, if necessary, may be sufficient to rebound from a mild case of hypothermia.
     The techniques for treating mild hypothermia are also applicable to moderate cases.  However, as a case of hypothermia worsens, the response must be scaled accordingly.  The human body requires fuel to generate heat; carbohydrate-rich foods provide the fastest conversion to energy.  Proteins are converted more slowly, over a longer period.  Fats are also converted slowly over a long period, but more water and energy are consumed in the conversion.  In short, carbohydrates facilitate recovery, while proteins and fats (to a lesser degree, with sufficient hydration) are better for long-term sustenance.
     Warm to hot (but not too hot) drinks are very beneficial.  They can provide immediate heat, an energy source (calories), and hydration simultaneously.  Alcohol and caffeine should be avoided because their consumption causes counterproductive physiological responses.
     If increased physical activity and warm shelter are insufficient, or unavailable, an external heat source may be needed.  A nearby fire or heater can warm the person and dry his/her clothing before redressing.  Hot water bottles, chemical heat packs, or similar heat source applied to the neck, armpits, and groin effectively warm the core.  Another person can also serve as an external heat source, provided that person is not also experiencing a heat deficit (i.e. s/he is normothermic).
     If the victim is able to drink, s/he should be given warm sugar water.  In severe conditions, the digestive system is incapable of processing solid food; a sugar mixture provides fuel the body needs to generate heat in a form it can process.  A gelatin dessert mix can also be used; the combination of sugar and protein provides fast- and slow-release energy.  Any drink must be dilute for the body to convert it to energy.
     A severe case may require a hypothermia wrap and transport to a medical facility.  Multiple blankets and sleeping bags can be used to create the wrap.  It is imperative that the victim and the wrap remain dry; this may require a wicking layer next to the skin and a waterproof outer layer.
     The heart is the organ most vulnerable to functional disruption in cold conditions.  The combined stress of hypothermia and physical shocks, such as those caused by being moved or carried, can induce cardiac fibrillation.  Performing CPR can also hasten the death it is intended to prevent because of the heart’s hypersensitivity in these conditions.
     “Rescue breathing” is the practice of a normothermic person gently blowing warm air into the victim’s mouth.  The pre-warmed air reduces respirational heat loss; it may also add oxygen needed to metabolize sugar and generate heat.
     To reiterate, severe hypothermia is a life-threatening condition; any missteps during treatment can hasten death.  Seek emergency medical care.
Afterdrop
     Afterdrop is a dangerous drop in core temperature that occurs while rewarming a victim of hypothermia.  It is caused by vasodilation in the extremities allowing very cold blood to return to the core.  Blood stagnated in the arms and legs also becomes acidic; upon recirculation, it may cause the sensitive heart to become arrhythmic.
     Prevention of afterdrop requires a carefully-controlled warming process; only the core should be warmed.  Exposure to extreme heat, such as moving into a hot room, can cause superficial warming of the extremities that initiates vasodilation and recirculation before the core is warm enough to tolerate it.
Death
     Recovery from a core temperature below 77° F (25° C) would be miraculous; death is a near-certainty.  A victim may lose consciousness and exhibit pulse and respiratory rates so low that they are difficult to detect, causing a premature declaration of death.  Entering such a state is the body’s final attempt at survival, reducing energy expenditure to its absolute minimum.

Other Cold-Related Effects
     Commonly-experienced effects of exposure to cold, such as numbness, redness, and stinging upon rewarming, occur with such regularity that little attention is paid to them.  Many do not consider these to be cold injuries; it is only when severity increases that they take note.  This is unfortunate, as proper attention to all occurrences of potential injury is key to the prevention of severe injury.
     Manual dexterity and flexibility are reduced with exposure to temperatures as high as 59° F (15° C).  A one-hour exposure at 45° F (7° C) can cause as much as a 20% loss of dexterity.  Continued exposure can reduce blood flow to the fingers to as little as 2% of normal.  Mild shivering exacerbates the loss of fine motor skill.  When shivering becomes severe, or violent, even coarse motor control becomes extremely difficult. The impact on task performance is intuitive; however, the potential for increased accident rates may be less so.
     Cognitive ability and psychomotor function, such as the ability to skillfully operate tools, also decline in cold environments.  The connection to safety may be more obvious here, though the extent of the decline may be surprising.  When core body temperature drops by as little as 7° F (4° C), a person loses the ability to make life-saving (“fight or flight”) decisions.  Core temperature may drop another 10° F (6° C) or more before the person loses consciousness; in the interim, a person can put him/herself in much greater peril with poor decision-making.
     Maximum vasoconstriction (i.e. minimum blood flow) in the extremities occurs at approximately 59° F (15° C).  If further cooled, to approximately 50° F (10° C), alternating periods of vasodilation and vasoconstriction begin.  This cold-induced vasodilation (CIVD) occurs in 5 – 10-minute cycles, providing some protection against cold injury via periodic rewarming of the extremities.  This phenomenon has been observed, but not fully explained; it remains unclear when this vascular behavior ceases and precisely why it occurs.
     Cold exposure can also modify the body’s response to heat upon rewarming.  During cold exposure, the sweating mechanism is disabled; upon rewarming, an increased threshold temperature and latent period delay the onset of sweating during subsequent heat exposure.  This shifting response reinforces the need for an acclimatization period, particularly for those exposed to both high- and low-temperature environments.

     Preparation for exposure and recovery between exposures are equally important in hot and cold environments.  The following cold-recovery guidelines are similar to those presented in Part 3 for heat exposure:
  • Spend the “downtime” in a warm, dry environment.
  • Tailor diet to the energy needs of the cold environment – proteins and fats for extended energy release and carbohydrates for quicker conversion to energy.
  • Replenish fluids.  Converting food to life-saving heat requires water.  Fluid loss is easily overlooked in a cold environment, but hydration must be given proper attention.  Alcoholic and caffeinated beverages should be avoided.
  • Use the time to verify that plenty of warm, dry layers are available for the next work shift, including hats, boots, and gloves.  Other gear, such as goggles, should also be checked to ensure proper protection will be provided.
  • Get plenty of rest.  Working in cold conditions is energy-intensive; the body needs time to recover and “rebuild.”
 
Risk Factors
     Several factors affect the risk one assumes when exposing him/herself to cold conditions.  A person’s overall condition, or general health, provides the baseline assessment.  In general, the better one’s physical fitness, the greater his/her resistance to cold injury.
     Specific health issues of concern include heart conditions and previous cardiac events (i.e. heart attacks).  As the heart is most susceptible to disruption by cold, any “imperfection” can become a significant liability as conditions degrade.
     Previous exposures, particularly overexposures, can reduce a person’s tolerance for cold conditions.  Previously-damaged tissue is susceptible to re-injury; each occurrence tends to be worse than the previous.
     Perhaps the greatest risk in cold conditions is overconfidence.  Overconfidence increases exposure unnecessarily when one convinces him/herself that additional precautions are excessive.  There are several workplace factors in which one might be overconfident, including:
  • One’s personal fortitude (i.e. ability to maintain efficacy in cold conditions; ego).
  • Performance capability of selected clothing and gear.
  • Stability of conditions during the planned work period (e.g. temperature, wind).
  • Stability of schedule (i.e. ability to complete tasks in allotted time).
 
     As a person’s condition deteriorates – hypothermia deepens – s/he becomes less aware of his/her condition and endangerment.  This makes coworkers who are aware of the signs of cold injury in others critical to a team’s safety.  Individuals’ susceptibilities to conditions vary widely and are not always known in advance.  The ability to recognize changes in a person’s behavior or physical condition and respond accordingly is paramount.


     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).

References
[Link] Human Factors in Technology.  Edward Bennett, James Degan, Joseph Spiegel (eds).  McGraw-Hill Book Company, Inc., 1963.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “Hypothermia and Cold Weather Injuries.”  Rick Curtis.  Princeton University Outdoor Action Program, 1995.
[Link] “Fire and Ice: Protecting Workers in Extreme Temperatures.”  Donald J. Garvey.  Professional Safety; September 2017.
[Link] “Cold Weather Exposure.”  Agricultural Safety and Health Program, Ohio State University Extension; May 17, 2019.
[Link] “Cold Stress – Cold Related Illnesses.”  National Institute for Occupational Safety and Health; June 6, 2018.
[Link] “Effect of body temperature on cold induced vasodilation.”  Andreas D. Flouris, David A. Westwood, Igor B. Mekjavic, and Stephen S. Cheung.  European Journal of Applied Physiology; June 21, 2008.
[Link] “Influence of thermal balance on cold-induced vasodilation.”  Andreas D. Flouris and Stephen S. Cheung.  Journal of Applied Physiology; April 2009.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 6:  Thermoregulation in Cold Environments]]>Wed, 18 Oct 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-6-thermoregulation-in-cold-environments     Many of the human body’s responses to cold mirror those initiated by exposure to heat.  Others are unique physiological mechanisms engaged to pursue diametrically-opposed objectives.  The risks associated with cold stress are very different from those of heat stress, producing unique forms of strain that require proper and effective management.
     This installment parallels Part 2 of the “Thermal Work Environments” series, providing an overview of thermoregulatory functions activated by cold stress.  The heat balance equation is also revisited, discussing each term in the context of cold environments.  These two installments are “companion pieces;” each can stand alone, but they are most helpful when reviewed in conjunction.
Thermoregulatory Function
     The hypothalamus (see Exhibit 1) is responsible for regulation of core body temperature.  Heat-retention functions, such as vasoconstriction and shivering, are managed by the posterior hypothalamus.
     Vasoconstriction reduces blood flow to the outer regions of the body; heat is retained in the body’s core.  As core temperature drops, the heart rate decreases to further limit heat loss.
     Shivering is an involuntary response, engaged to offset heat loss.  A person can postpone the onset of shivering through force of will, but doing so could exacerbate an already-dangerous situation.  The onset of shivering should be treated as a warning; a warm refuge should be sought.
 
Heat Balance
     The objective of homeothermy requires viewing the body’s heat balance from different perspectives.  Heat stress requires a focus on the management (i.e. minimization) of heat gain.  Cold stress, in contrast, requires a focus on heat loss, which adds a layer of risk not present during heat stress.
     When experiencing heat stress, the temperature of a person’s extremities cannot be driven higher than his/her core temperature by thermoregulatory means.  Internal, involuntary functions are activated to transfer heat from the core to the extremities and the surroundings; heat transfer ceases when temperatures equalize.
     In contrast, cold stress can result in large temperature differentials between the core and the extremities.  As the hypothalamus activates thermoregulatory functions to maintain core temperature, the extremities are actively neglected.  The human body will sacrifice its appendages for the survival of the being.
     Therefore, thermoregulation and heat balance, in the context of cold environments, must be considered in two stages:  (1) retention of full physical and cognitive function, and (2) survival of the being, i.e. maintenance of core temperature at all costs.  These two objectives have very different thermal requirements.
     The second stage represents emergency situations, where circumstances are beyond control.  This presentation focuses on the first stage, as it is more-frequently applicable to occupational scenarios.  Successful management of heat balance in the first stage precludes the emergency responses of the second.
 
     The form of heat balance equation used in this series is
     S = M + W + C + R + K + E + Resp,
where S is heat storage rate, M is metabolic rate, W is work rate, C is convective heat transfer (convection), R is radiative heat transfer (radiation), K is conductive heat transfer (conduction), E is evaporative cooling (evaporation), and Resp is heat transfer due to respiration.  Each value is positive (+) when the body gains thermal energy (“heat gain”) and negative (-) when thermal energy is dissipated to the surrounding environment (“heat loss”).  Each term can be expressed in any unit of energy or, if time is accounted for, power, but consistency must be maintained.  The following discussion provides some detail on each component of the heat balance equation in the context of cold environments, contrasting with that of hot environments where it improves clarity.
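Because the equation is a simple signed sum, it can be sketched directly in code.  The component values in this illustration are hypothetical, chosen only to demonstrate the sign convention (negative terms represent heat loss):

```python
# Heat storage rate S = M + W + C + R + K + E + Resp, all in watts.
# Component values below are hypothetical illustrations, not measurements.
def heat_storage_rate(M, W, C, R, K, E, Resp):
    """S > 0: net heat gain; S < 0: net heat loss."""
    return M + W + C + R + K + E + Resp

# Moderate work in a cold environment (illustrative values, watts):
S = heat_storage_rate(M=300, W=-25, C=-180, R=-60, K=-15, E=-70, Resp=-40)
print(S)  # -90: a net heat loss that must be offset to maintain function
```

A negative result signals that insulation, activity, or shelter must change before the deficit accumulates.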
 
     As discussed in Part 2, the hypothetical “perfect” equilibrium is attained at S = 0.  In a cold environment, two equilibria are possible, corresponding to the stages referenced above.  Here, S = 0 is the target average to maintain full function.  A person can suffer a loss of function when S < 0 for an extended period, trends downward, or becomes exceptionally low.
     The average “normal” core temperature is 98.6° F (37° C).  Ideally, any drop in core temperature will be limited to 97° F (36° C), the temperature below which most people suffer some loss of physiological function.  Increasing core temperature significantly above normal can also be hazardous if sweating is induced (discussed further below).  Oral temperatures in the 97 – 99° F (36 – 37° C) range are safe for most people in most situations.
 
     The rate at which the body generates heat, M, varies with activity.  Precise values are difficult to obtain; representative estimates, such as those shown in Exhibit 2, are usually used instead.  The value of M when a person is at rest under normal conditions is called the basal metabolic rate (BMR).
     A person’s size and weight, growth stage, diet, fitness, and drug use can affect his/her metabolic rate.  A young, growing, physically-fit person has a higher BMR than a comparably-sized older adult.  A lower BMR must be compensated for by other means to maintain heat balance in a cold environment.
     When the combination of BMR and activity is insufficient, shivering may be induced, adding as much as 400 W to a person’s metabolic rate.  Shivering is an involuntary, uncoordinated activation of skeletal muscles with the sole purpose of heat production.  It slows further heat loss, but cannot generate sufficient heat to replace that already lost (i.e. raise body temperature).

     The work rate (W) represents the portion of energy consumed in the performance of work that is not converted to heat.  Many formulations of the heat balance equation exclude this negative (heat-reducing) term, deeming it safe to ignore, as it is usually less than 10% of M.  However, in critical situations, the work rate may gain significance.

     By definition, the air temperature in a cold environment is below safe body temperature.  Therefore, the convection (C) component is negative.  In a cold environment with significant air movement, convection causes the greatest heat loss.  Convective heat loss is combatted with insulating layers of clothing.  The most effective combination of clothing layers depends on several personal, environmental, and behavioral variables.  Choices must be made among breathable, waterproof, and zippered garments, boots, hats, gloves, and other options to match their performance with the environmental conditions and the work to be performed.  Any exposed skin increases convective heat loss.

     The radiation (R) term will also be negative in most cases.  In open air, in direct sunlight, for example, significant radiation may be received.  However, the insulating layers protecting against other forms of heat loss will likely prevent an appreciable heat gain.  In the vast majority of situations, it is those insulating layers that prevent radiative heat loss.  In the absence of insulation (i.e. clothing) and appreciable air movement, radiation causes the greatest heat loss.

     Heat loss via conduction (K) is typically associated with handling of tools and materials and walking on cold surfaces.  Gloves and boots appropriate for the type of work being performed should be worn to minimize this.  If dexterity requirements prohibit the use of gloves, appropriate work/warming cycles must be implemented to prevent loss of dexterity and cold injury.  Even when conductive heat loss is small relative to total heat loss, cold hands and feet can have a disproportionate effect on comfort and performance.

     Evaporation (E) remains a significant source of heat loss.  In cold stress situations, the goal is to minimize evaporative cooling; it cannot be eliminated and must be managed.  “Insensible water loss” continues in cold conditions; this moisture must be dissipated to avoid wetting clothing and degrading its insulating properties.  The challenge becomes greater if strenuous activity induces sweating.  Rapid heat loss due to the combined effect of reduced insulation and excessive evaporation could lower one’s body temperature to an uncomfortable, if not dangerous, level.  While the humidity in the surrounding air has a profound impact on evaporative cooling under conditions of heat stress, it plays no appreciable role in cold environments.

     Heat loss due to respiration (Resp) also requires management.  For example, a face covering that allows exhaled water vapor to dissipate while retaining some of the heat transferred to the air in the lungs could be a helpful complement to a worker’s wardrobe.  In addition to any respiratory heat that may be recaptured, heat losses due to convection and radiation are lowered by the reduced skin exposure.  Perceived comfort may also improve significantly by warming the nose, cheeks, and ears.

     The analogous representation of the human body’s heat balance as a mechanical balance scale, as shown in Exhibit 3, continues to be a helpful visualization.  It is quite intuitive and identification of the diurnal range on the scale provides an important reminder that some variation in core temperature is normal and unrelated to our thermal work environments.
     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).

References
[Link] Human Factors in Technology.  Edward Bennett, James Degan, Joseph Spiegel (eds).  McGraw-Hill Book Company, Inc., 1963.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “Hypothalamus” in Encyclopedia of Neuroscience.  Qian Gao and Tamas Horvath.   Springer, Berlin, Heidelberg; 2009.
[Link] “Thermal Indices and Thermophysiological Modeling for Heat Stress.”  George Havenith and Dusan Fiala.  Comprehensive Physiology; January 2016.
[Link] “Hypothermia and Cold Weather Injuries.”  Rick Curtis.  Princeton University Outdoor Action Program, 1995.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 6:  Measurement of Sound Exposure]]>Wed, 04 Oct 2023 06:09:39 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-6-measurement-of-sound-exposure     The measurement of sound pressure levels throughout a workplace is a fundamental component of noise-control and hearing-conservation initiatives.  It is the basis for exposure assessment and regulatory guidance.  Sound measurement and audiometry are opposite sides of the same coin.
     This installment of the “Occupational Soundscapes” series introduces basic concepts of sound level measurement and exposure assessment.  Equipment used, frequencies analyzed, calculation of a “dose,” and more are presented.  Like the presentation of audiometry (Part 5), its aim is to provide a level of understanding, within the constraints of this format, that engenders trust in an organization’s noise-related practices.
Types of Sound
     There are three types of sound to which a person may be exposed:  continuous, intermittent, and transient.  These are not unique to occupational settings, but understanding each is critical to defining an occupational soundscape.  Accurate definition of the soundscape is required to identify and implement appropriate noise-control and hearing-protection measures.
     Continuous or “steady-state” sound is nearly constant over long periods of time.  Generally, sound that varies less than ± 3 dB is considered continuous.  OSHA also treats sounds with level maxima at intervals up to one second as continuous.  Electric motor-driven equipment, fans, and turbines are common examples of continuous sound generators.  Continuous sounds are the most predictable and easily measured.  Thus, it is relatively straightforward to identify and implement appropriate countermeasures.
     Sounds that vary by more than ± 3 dB are termed intermittent.  This can include the “extreme” case of a sound being absent (“turned off”) at intervals during the measurement.  Less-extreme sound level variations are often called time-varying or fluctuating.  For example, automated equipment that performs a series of operations in a repeating cycle – workpiece transfers, pressurization and exhaust, enclosure closing and opening, clamping and unclamping, etc. – can generate highly-variable sound pressure levels (SPLs).
     Fluctuating sounds may or may not be predictable.  A single automated workcell may generate predictable sound levels as it executes highly-repeatable process cycles.  In contrast, the interaction of a variety of equipment, processes, and schedules can make predicting sound levels highly impractical.  Quantifying such a variable soundscape may require the use of averages or other sound level definitions for a “typical” workday.  For example, L10, L50, and L90 levels may be cited to describe the variation in the environment.  These correspond to the sound level exceeded during 10%, 50%, and 90%, respectively, of the measurement period.  For a “normal” workday, the notation L90,8hr, for example, may be used to specify the measurement period.  Varying sound levels may also necessitate the use of dose calculations (see “Exposure Indices,” below).
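Statistical levels of this kind can be computed as percentiles of a series of SPL readings.  The sketch below uses a hypothetical set of samples and a simple nearest-rank method; instruments and analysis software may interpolate differently:

```python
def exceedance_level(levels, pct_exceeded):
    """L_n: the sound level exceeded pct_exceeded% of the measurement period.

    Equivalent to the (100 - n)th percentile of the readings
    (nearest-rank method).
    """
    s = sorted(levels)
    idx = int(round((100 - pct_exceeded) / 100 * (len(s) - 1)))
    return s[idx]

samples = [78, 80, 82, 85, 90, 76, 79, 88, 84, 81]  # hypothetical dBA readings
print(exceedance_level(samples, 90))  # L90 - near the quietest sustained level
print(exceedance_level(samples, 10))  # L10 - near the loudest sustained level
```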
     Transient sounds exist only for a very brief period of time (often less than one second); these may also be called instantaneous sounds.  There are two types of transient sounds – impact and impulse (or impulsive) sounds.  Impact sounds are generated by the collision of two solid objects, such as parts falling into a bin.  Impulse sounds are generated by rapid releases of energy, such as explosions, gunfire, or pressurized fluid release (i.e. burst or relief).
     Transient sounds pose the greatest risk to hearing for three interrelated reasons.  They often occur at sufficiently-high levels (e.g. > 140 dB) to cause immediate permanent hearing loss.  They are often unpredictable; therefore, proper protections may not be in place (e.g. use of HPDs) when they occur.  They are also more difficult to measure, defying the accurate definition needed to implement appropriate countermeasures.
     Results of some studies suggest that transient sounds exhibit a synergistic effect with continuous sound.  That is, adding transient sounds to a “background” of continuous sound has a greater effect on hearing than SPL addition might indicate.  This combined-sound scenario is common in industrial settings, reinforcing the need to pay special attention to transient sounds in the occupational soundscape.
 
Measured Frequencies
     Sound levels are measured in octave bands identified by their nominal center frequencies.  An octave band is a range of frequencies in which the upper limit is twice the lower limit (fupper = 2 x flower Hz).  All sound transmitted at frequencies within an octave band is aggregated and associated with the band’s center, or nominal, frequency.  A “full” octave is called a 1/1 octave band; the “1/1” designation is often dropped, as it is assumed when absent.
     For a more-refined analysis of a soundscape, sound can be measured in 1/3 octave bands.  As the name implies, each octave is subdivided into three bands, each with its own center frequency to which measured sound levels correspond.  The 1/1 and 1/3 octave bands in common use for workplace noise assessments are tabulated in Exhibit 1.  Measurements can be conducted in narrower frequency ranges, with the right instrument, but the 1/1 and 1/3 octave bands should suffice until noise control practitioners gain experience and sophistication.
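Given a nominal center frequency, the band edges follow from the defining ratio.  A minimal sketch, using the base-2 convention (standards also define a base-10 variant):

```python
# For a 1/1 octave band centered at fc, the edges are fc/sqrt(2) and
# fc*sqrt(2), so f_upper = 2 * f_lower.  For a 1/3 octave band, the
# half-band factor is 2**(1/6).
def band_edges(fc, fraction=1):
    half = 2 ** (1 / (2 * fraction))
    return fc / half, fc * half

lo, hi = band_edges(1000)    # 1/1 octave band centered at 1 kHz
print(round(lo), round(hi))  # 707 1414 -> upper edge is twice the lower
```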
Frequency Weighting
     Several frequency-weighted sound level scales have been developed for various purposes.  Those relevant to hearing conservation programs are A-weighting and C-weighting.  A-weighting (dBA) is used to model human perception of loudness of sound.  C-weighting (dBC) is typically used in the assessment of hearing protection device (HPD) effectiveness.  Z-weighting (dBZ) is often used when measuring exposures to transient sounds.  It is an unweighted measurement scale (making it something of a misnomer); therefore, dBZ ≡ dB and the Z-weighting designation is often dropped.
     The A- and C-weighting factors are tabulated in Exhibit 2 and shown graphically in Exhibit 3.  Although 1/3 octave bands are shown, it is common practice to use 1/1 octave band (highlighted rows) measurements in workplace assessments.
     An example data set converted to A- and C-weighted sound levels is shown in Exhibit 4.  The sound level measurement in each frequency band is adjusted by the corresponding weighting factor.  To determine the overall weighted sound level, the sound levels at all frequencies are added (see “SPL Addition” in Part 4).  Comparing sound levels attained for the different scales demonstrates how significant weighting can be to assessment results.
     B-weighting (dBB) is also included in the example for comparison, though it is no longer in common use.  It was originally conceived as a scale for “medium-level” sounds, but has failed to offer sufficient utility to justify its retention alongside A- and C-weighting.  D-weighting (dBD) was developed for the unique characteristics of aircraft noise.  Its use is currently limited to non-bypass jet engines on military aircraft.  It is included in the graph of Exhibit 3, but precise weighting values are not readily available.
     If only A- and C-weighted sound levels (LA and LC, respectively) are known, some insight into the frequency composition of the sound can still be gained.  When higher frequencies are prevalent, LA > LC and when lower frequencies are prevalent, LA < LC.  When LA and LC are nearly equal, the sound is not dominated by frequencies at either end of the range.  Review the weighting factors in Exhibit 2 or the graph in Exhibit 3 to see why this is true.
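The weighting-then-addition procedure can be sketched as follows.  The band levels are hypothetical; the A-weighting factors are the standard 1/1 octave band values:

```python
import math

# Standard A-weighting factors (dB) at 1/1 octave band center frequencies.
A_WT = {63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
        1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}

def overall_level(band_levels, weights):
    """Weight each band, then combine by SPL addition:
    L = 10 * log10(sum(10**(Li / 10)))."""
    return 10 * math.log10(sum(
        10 ** ((band_levels[f] + weights[f]) / 10) for f in band_levels))

# Hypothetical measured band levels (dB):
bands = {63: 85, 125: 84, 250: 82, 500: 80,
         1000: 78, 2000: 76, 4000: 74, 8000: 70}
print(round(overall_level(bands, A_WT), 1))  # overall level in dBA
```

Repeating the calculation with C-weighting factors (near 0 dB across most of the range) would yield a higher overall level for this low-frequency-dominated spectrum, illustrating the LA < LC relationship described above.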
 
Exposure Indices
     The magnitude of a person’s exposure to sound energy can be expressed in various ways.  The choice of exposure index may be influenced by several factors, such as:
  • measurement equipment in use,
  • purpose of the measurement; e.g. the standard to which compliance is sought,
  • type(s) of sounds to which a person is exposed,
  • perceived risk of exposure,
  • amount of measurement data attainable, and
  • characteristics of a person’s workday; e.g. assignment rotations, etc.
Some considerations relevant to the context of workplace noise assessments are explored below as the most-relevant indices are presented.
     Exposure to continuous sound is well-defined by a single measurement, utilizing A-weighting and expressed in dBA.  OSHA defines the maximum duration of exposure allowable at A-weighted SPLs from 80 – 130 dBA; this information is tabulated in Exhibit 5.  This “reference duration” can also be calculated:
     T = 8 / 2^((L – 90)/5) hours,
where L is the measured A-weighted sound level.  The valid range for this calculation is also 80 – 130 dBA.  Below 80 dBA, indefinite exposure is acceptable; there is no appreciable risk to hearing.  Continuous exposures above 130 dBA are impermissible for any duration (T = 0 above 130 dBA).
     Other organizations determine the reference duration with different parameters.  The generic form of the reference duration calculation is:
     Tn = Tc / 2^((LA – Lc)/ER),
where:
  • Tn = reference (i.e. permissible) duration of exposure at LA, hours;
  • Tc = duration of exposure, hours;
  • LA = A-weighted sound level of exposure, dBA;
  • Lc = criterion level, dB;
  • ER = exchange rate, dB.
     The criterion level is the permissible sound level exposure for a “standard” 8-hour workday.  OSHA’s criterion level is 90 dBA, while NIOSH sets it at 85 dBA.
     The exchange rate is also called the “doubling rate.”  It is the increase in sound level exposure permissible when exposure duration is halved.  Alternatively, it is the decrease in sound level exposure required to double the permissible exposure duration.  OSHA’s exchange rate is 5 dB, while NIOSH prescribes a 3-dB ER.
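Both rules can be captured in one function.  In this sketch the parameter defaults follow the OSHA values, with the NIOSH values passed explicitly:

```python
# Generic reference (permissible) duration: Tn = Tc / 2**((LA - Lc) / ER).
# Defaults: OSHA (Tc = 8 h, Lc = 90 dBA, ER = 5 dB); NIOSH uses Lc = 85, ER = 3.
def reference_duration(LA, Lc=90.0, ER=5.0, Tc=8.0):
    return Tc / 2 ** ((LA - Lc) / ER)

print(reference_duration(95))               # OSHA: 4.0 h at 95 dBA
print(reference_duration(95, Lc=85, ER=3))  # NIOSH: ~0.79 h (about 48 min)
```

The comparison at 95 dBA makes the practical impact of the exchange rate plain: the NIOSH limit is roughly one-fifth of OSHA’s.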
     NIOSH, in its “Criteria for a Recommended Standard,” provides some history of ER development and the current disparity between its specified exchange rate and that of OSHA.  NIOSH demonstrates how a 3-dB ER accurately reflects the amount of sound energy a person is exposed to when conditions change.  That is, the exchanged time and sound level are shown to be equivalent in energy terms.  This equivalency is known as the equal-energy rule.
     Intermittent sounds are not fully defined by a single measurement.  The L10, L50, and L90 levels introduced previously can provide significant information about a soundscape, but its definition remains incomplete.  To fully account for the entirety of exposure, a person’s “noise dose” can be determined.  Doing so can require a large number of measurements if a person’s exposure changes frequently during a workday.
     A person’s noise dose, D, is calculated as follows:
     D = 100 x (C1/T1 + C2/T2 + … + Cn/Tn),
where Ci is the duration of exposure at a specific sound level and Ti is the reference duration at that sound level.  The OSHA reference duration calculation is shown above; for NIOSH, use the generalized formula with Lc = 85 dB and ER = 3 dB.
     A 100% noise dose indicates that a person’s average exposure to sound levels equals the permissible exposure limit (PEL).  It is important to remember that a 100% dose may be reached long before the workday has ended!
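A minimal sketch of the dose calculation, using OSHA parameters and a hypothetical exposure profile:

```python
# OSHA reference duration for a given A-weighted sound level.
def osha_reference_duration(LA):
    return 8 / 2 ** ((LA - 90) / 5)

def noise_dose(exposures):
    """D = 100 * sum(Ci / Ti); exposures is a list of (hours, dBA) pairs.

    Time below the threshold of concern (e.g. < 80 dBA) contributes no dose
    and is simply omitted from the list.
    """
    return 100 * sum(C / osha_reference_duration(L) for C, L in exposures)

# Hypothetical day: 4 h at 90 dBA, 2 h at 95 dBA, remainder below 80 dBA.
print(noise_dose([(4, 90), (2, 95)]))  # 100.0 -> exactly at the PEL
```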
     If the variations in a person’s exposure are cyclical and, therefore, predictable, the anticipated daily noise dose can be determined with fewer measurements.  For example, a worker performs a series of tasks that require two hours to complete and repeats the pattern four times per workday.  The person’s exposure data for one 2-hour cycle can be extrapolated to an anticipated 8-hour dose using the dose rate, DR:
     DR = D / t,
where D is the dose (%) accumulated during a measurement period of duration t.
For our example, the worker receives a 25% dose in 120 minutes; DR = 0.21%/min.  Multiplying by the full duration of the workday (480 min) yields a 100% dose (rounding error may be introduced).  Use of dose rates can significantly reduce the monitoring workload in predictable soundscapes.
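The extrapolation can be reproduced in a few lines; the numbers mirror the hypothetical worker above:

```python
dose_measured = 25.0   # % dose accumulated during one 2-hour task cycle
cycle_minutes = 120
shift_minutes = 480    # "standard" 8-hour workday

DR = dose_measured / cycle_minutes        # dose rate, %/min
projected = round(DR * shift_minutes, 1)  # anticipated full-shift dose, %
print(round(DR, 4), projected)  # 0.2083 100.0
```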
     A time-weighted average (TWA) may be a more intuitive index.  To convert a daily dose to an 8-hour TWA, the following calculation is performed:
     TWA = (ER / log10 2) x log10(D/100) + Lc.
The exchange rate (ER) and criterion level (Lc) must be consistent with the standard used to determine the dose (e.g. NIOSH, OSHA, etc.).  At 100% dose, TWA = Lc.  OSHA provides a table of converted values (see Exhibit 6) that may expedite TWA determinations.
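A sketch of the conversion, with OSHA defaults (Lc = 90, ER = 5, yielding the familiar 16.61 multiplier in OSHA’s published form):

```python
import math

# TWA = (ER / log10(2)) * log10(D / 100) + Lc; D is the daily dose in percent.
def twa_from_dose(D, Lc=90.0, ER=5.0):
    return (ER / math.log10(2)) * math.log10(D / 100) + Lc

print(round(twa_from_dose(100), 1))  # 90.0 -> at 100% dose, TWA = Lc
print(round(twa_from_dose(50), 1))   # 85.0 -> halving the dose lowers TWA by ER
```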
     An alternative representation of a person’s sound exposure is provided by the equivalent sound exposure level, LAeq,T, known in recent standards as the equivalent continuous sound level, LAT.  Conceptually, LAT is the logarithm of the ratio of sound exposure to exposure duration.  Exposure can be integrated over any time period; when the measurement period is eight hours, LAT ≡ TWA.
     The pressure changes associated with transient sounds can occur so rapidly that the sound exposure can be integrated over a one-second time period.  This “equivalent” measure is called the sound exposure level (SEL).  Integrating the same exposure over an 8-hour time period yields an Leq that is 44.6 dB lower than the SEL (SEL = Leq,8hr + 44.6 dB), an indication of the ferocity of transient sounds.
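The 44.6 dB offset follows directly from spreading the same acoustic energy over 28,800 seconds (8 hours) instead of one second; a one-line Python check:

```python
import math

# Energy averaged over 8 h (28,800 s) rather than the 1 s SEL reference period:
offset_db = 10 * math.log10(8 * 3600 / 1)
print(round(offset_db, 1))  # 44.6
```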
 
Sound-Measurement Equipment
     Development of sound-measurement equipment over several decades has provided sophisticated measurement and analysis tools to industrial hygienists and researchers alike.  A thorough treatment of these instruments and their capabilities is beyond the scope of this series.  However, an introduction to the types of equipment commonly used to support hearing conservation in occupational settings could prompt participants to conduct necessary research and enlist the assistance of knowledgeable and experienced specialists.
Microphones
     Due to their use in musical and theatrical performances, public address systems, and other applications, microphones seem familiar to most people.  However, few understand the principles of operation or why a particular microphone is used in each application.
     Three types of microphones may be used in sound measurement, the choice depending on the nature of the sound field.  Proper orientation of a microphone in a sound field is defined by the type in use; a pictorial summary is provided in Exhibit 7.
     A direct-incidence microphone is placed parallel to the sound wave’s direction of travel, or “pointing at” the sound source.  This type is also known as a free-field or normal-incidence microphone.  This type of microphone is well-suited for use with a single sound source and in the absence of reflections.
     A pressure microphone is positioned perpendicular to the sound wave’s travel.  For this reason, this type is sometimes called a grazing microphone.  Pressure microphones are often used in calibration of audiometric equipment or flush-mounting in a wall, baffle, or similar surface.
     A random-incidence microphone should be oriented at approximately 70° from the primary sound source’s wave propagation direction.  This type is also called an omni-directional microphone to describe its ability to process multiple sound sources and reflections simultaneously.  It is this capability that makes them well-suited for many industrial environments.
     In addition to its type, a microphone’s physical characteristics (e.g. diameter) can determine its suitability to a particular application.  Exhibit 8 demonstrates this by comparing the frequency responses of several microphones.  To get the best results, properties of the sound field and measurement equipment must be compatible.
Sound Level Meters
     Though presented separately, above, microphones are typically integral to sound level meters (SLMs) used in occupational noise assessments.  These are usually handheld devices that also carry a display and setup capabilities onboard.  This discussion of SLMs is somewhat superficial by necessity.  The technology behind their operation is far beyond the scope of this series; here, the focus is on basic functionality and operator interface.
     Two classes of SLM are defined in current standards.  Class 2 devices are “general purpose” units and may suffice for preliminary assessments and other applications with less-stringent requirements.  Class 1 devices are precision instruments with higher performance characteristics.  A comparison of specifications for Class 1 and Class 2 SLMs is provided in Exhibit 9.
     Preparing an SLM for an assessment consists of selecting several parameters according to the sound field to be studied and the purpose of the study.  Modern SLMs typically offer A-, C-, and Z-weighting options (see “Frequency Weighting,” above) to accommodate a variety of needs.  The user can also choose to make 1/1 or 1/3 octave band measurements.  Some models offer additional octave band options to refine measurements further.
     The time constant (τ) to be used for measurements must also be selected.  An SLM’s time constant is a measure of its responsiveness to changes in sound levels; the shorter the time constant, the more rapidly the instrument responds to changes in the input.  The time constant is defined as the time required for a response curve to reach 63% of the maximum value of a step change in the input.
     In the occupational noise context, a slow (S) time constant (τ = 1 s) is typically used to smooth the response curve when the input is highly variable.  A fast (F) time constant (τ = 0.125 s) is used when the range of sound levels (i.e. min, max) is of greatest interest.  If fast-response (F) readings fluctuate more than ±4 dB, the relative stability of slow-response (S) readings may be preferable.
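The 63% figure in the time-constant definition follows from first-order exponential averaging; a brief check, assuming a simple first-order (RC-style) response:

```python
import math

def step_response(t, tau):
    """Fraction of a step change registered after time t by a first-order system."""
    return 1 - math.exp(-t / tau)

print(round(step_response(1.0, 1.0), 2))    # 0.63 — one time constant elapsed: ~63%
print(round(step_response(0.125, 1.0), 2))  # 0.12 — slow (S) meter, fast event
```

The second result illustrates why the slow setting smooths a highly variable input: a fluctuation lasting 0.125 s registers only about 12% of its magnitude on a slow-weighted meter.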
     Two other time constants are considered “legacy” settings and may not be available on all instruments.  The impulse (I) time constant (τ = 0.005 s) is well-suited for measurements of impact noises.  The peak hold (τ = 0.00005 s) setting is used to capture the maximum level of extremely-short-duration transient sounds, such as gunshots.
     Compliance with a standard will dictate some or all of the settings and equipment types needed.  Understanding the context of the noise assessment is critical to obtaining useful results.
Dosimeters
     A dosimeter is, in essence, a sound level meter with additional computing capability and a modified physical form.  Either miniaturization of the instrument or separation of the microphone from the body of the unit makes it feasible for a person to wear a dosimeter for an entire workday without interfering with normal activities.  Positioning a microphone in a worker’s “hearing zone” – within 12 in (30.5 cm) of the ear canal – is untenable with a standard SLM.
     The requirements of various standards (OSHA, NIOSH, etc.) may be preprogrammed in the dosimeter; assessments per such standards are as simple as a setting selection.  Some units allow custom profiles, called virtual dosimeters, to be programmed; various exchange rates, criterion levels, and threshold levels can be entered manually.  The threshold level is the sound level below which the instrument does not integrate the exposure.  Stated another way, sounds below the threshold level are not included in the noise dose calculation.
     The large amount of data often required for dose calculations is held in onboard memory until it can be transferred to a computer for further analysis or long-term retention.  Lengthy measurement periods, in addition to onboard data processing, make battery life another important factor for consideration in the selection of devices.
 
     Frequency analyzers, spectrum analyzers, real-time analyzers (RTAs), and similarly-named instruments are also variations on the SLM theme.  Narrower octave bands, simultaneous measurements, graphical displays, and other features support advanced analysis.  Further exploration of these devices here is unwarranted; readers are encouraged to investigate the capabilities of these instruments after mastering the foundational elements of sound measurement and hearing conservation covered in this series.
 
 
     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] “Hearing Protection.”  Laborers-AGC Education and Training Fund; July 2000.
[Link] “Criteria for a Recommended Standard - Occupational Noise Exposure, Revised Criteria 1998.”  Publication No. 98-126, NIOSH, June 1998.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.
[Link] ”29 CFR 1910.95 - Occupational noise exposure.’  OSHA.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “The Impact of Threshold, Criterion Level and Exchange Rate on Noise Exposure Data Results.”  TSI Incorporated; 2020.
[Link] “Noise and Vibration.”  Evan Davies in Plant Engineer’s Reference Book, 2ed.  Dennis A. Snow, ed.  Reed Educational and Professional Publishing Ltd.; 2002.
[Link] “Sound level meter.”  Wikipedia.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 5:  Audiometry]]>Wed, 20 Sep 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-5-audiometry     Audiometry is the measurement of individuals’ hearing sensitivity using finely-regulated sound inputs.  It is a crucial component of a hearing loss prevention program (HLPP) with an emphasis on the range of frequencies prevalent in speech communication.  To be valid, audiometric testing must be conducted under controlled conditions and the results interpreted by a knowledgeable technician or audiologist.
     This installment of the “Occupational Soundscapes” series provides an introduction to audiometry, requirements for equipment, facilities, and personnel involved in audiometric testing, and the presentation and interpretation of test results.  It targets, primarily, those enrolled in – as opposed to responsible for – an HLPP.  Its purpose is to develop a basic understanding of a critical component of hearing conservation efforts to, in turn, engender confidence in the administration of procedures that may be foreign to many who undergo them.
The Audiologist
     Audiometric testing is conducted by an audiologist, audiometric technician, audiometrist, or physician.  Distinctions among these roles are not important to the present discussion; therefore, the term audiologist will be applied to any competent administrator of audiometric tests.
     Demonstration of competency in audiometric testing is typically achieved by attaining certification from the Council for Accreditation in Occupational Hearing Conservation (CAOHC) or equivalent body.  Physicians, such as otolaryngologists, are certified by their respective medical boards.
     The Occupational Safety and Health Administration (OSHA) requires that audiometric testing be administered by a licensed or certified audiologist, physician, or technician capable of obtaining valid audiograms (see “The Results,” below) and maintaining test equipment in proper working order.  OSHA does not require technicians operating microprocessor-controlled (i.e. automated) audiometers (see “The Equipment,” below) to be certified, but the National Institute for Occupational Safety and Health (NIOSH) rejects this exemption.
 
The Facility
     Audiometric testing is typically conducted in one of three types of test facility – onsite, mobile, or clinical.  Each has unique characteristics that must be considered to determine which is best-suited to an organization and its HLPP.
     An onsite test facility utilizes dedicated space within an organization where an audiometric test booth is permanently installed.  An onsite facility is typically feasible only for large organizations with more than 500 noise-exposed employees enrolled in an HLPP at a single location.  Dedicated facilities often require full-time professional staff, further limiting the range of organizations for which onsite facilities are appropriate.
     Mobile test facilities may be provided by a third-party contractor to support an organization’s HLPP.  This may be an appropriate solution for an organization with multiple operations throughout a region when the number of employees enrolled in the HLPP at each location is relatively small.
     A clinical test facility is an independent medical or occupational health practice.  Employees schedule audiometric tests as they would an eye exam, annual physical checkup, or other outpatient procedure.  For smaller entities or programs, this is often the most practical choice.  Administration by an independent brick-and-mortar medical practice may also increase employees’ confidence in the HLPP, providing a psychological benefit that is difficult to quantify.
     The facility, regardless of the type chosen, must be sufficiently isolated to prevent interference with audiometric testing.  Vibrations, ambient noise, and distracting sounds must be minimized to ensure a valid audiogram.  Maximum Permissible Ambient Noise Levels (MPANLs) are defined in standards and regulations (e.g. ANSI S3.1, 29 CFR 1910.95) for various types of test equipment.  It is important to note that sounds below the required MPANL, such as phone alerts, conversation, or traffic, can still be distracting and should be avoided.
 
The Equipment
     The two pieces of equipment most relevant to this discussion are the audiometer and the earphone.  There are three types of audiometer that may be encountered in an HLPP – manual, self-recording, and computer-controlled.  In the context of occupational hearing conservation, pure-tone, air-conduction audiometers are used; other types (e.g. bone-conduction) may be utilized for advanced analysis and diagnosis.
     Using a manual audiometer, the audiologist retains control of the frequency, level, and presentation of tones and manually records results.  This is the least sophisticated, thus least expensive, type of audiometer.  It is also the most reliant upon an audiologist’s skill and concentration.
     A self-recording, or Békésy, audiometer (named for its inventor) controls the frequency and level of tones, varying each according to the test subject’s responses; test results can be difficult to interpret.  This type of audiometer is no longer in common use in occupational HLPPs; its use is more common in research settings where its finer increments of tone frequency and level control are advantageous.
     Computer-controlled audiometers are prevalent in modern practice.  Continually-advancing technology has improved reliability and added automated functions, such as data collection, report generation, and test interruption for excessive ambient noise.  Stand-alone units may be called microprocessor audiometers; they also perform automated tests, but have fewer capabilities and cannot be upgraded as easily as software residing on a PC.
     There are also three types of earphone available for audiometric testing – supra-aural, circumaural, and insert.  A more-precise (“technical”) term for an earphone is “transducer;” “headset” or “earpiece” is more colloquial.
     Supra-aural earphones consist of two transducers, connected by a headband, that rest on the test subject’s outer ears; no seal is created.  Therefore, little attenuation is provided, requiring increased diligence in control of ambient sounds.
     Circumaural earphones consist of two transducers, housed in padded “earmuffs” that surround the ears, connected by a headband.  The seal provided by the earmuffs, though imperfect, provides significantly greater attenuation of ambient sound than supra-aural earphones.
     Insert earphones consist of flexible tubes attached to foam tips that are inserted in the ear canal.  The foam tip seals the ear canal, providing the greatest attenuation of ambient sound.  Test tones are delivered directly to each ear via the flexible tubes; the lack of physical connection between the transducers reduces the opportunity for transmission of tones from the tested ear to the “silent” ear.
     Some test subjects may experience discomfort, particularly when using insert earphones, which could lead to distraction that influences test results.  Recognizing signs of discomfort, distraction, or other interference is among the required skillset of an effective audiologist.
     Evidence suggests that the choice of earphone does not significantly affect test reliability.  However, earphones and audiometers are not interchangeable; an audiometer must be calibrated in conjunction with a paired earphone to provide valid test results.
 
The Test
     A typical audiometric test does not evaluate the entire frequency range of human hearing capability (~20 – 20,000 Hz).  Instead, the focus of testing is on the range of critical speech frequencies introduced in Part 2 of the series.  Specific test frequencies used are 500, 1000, 2000, 3000, 4000, and 6000 Hz.  Testing at 8000 Hz is also recommended for its potential diagnostic value; testing at 250 Hz may also be included.
     Each ear is tested independently by delivering pure tones at each frequency and varying levels, usually in 5 dB increments.  The minimum level at which a subject can hear a tone a specified proportion of the times it is presented (e.g. 2 of 3 or 3 of 5) is the person’s hearing threshold at that frequency.  Consecutive tests indicating thresholds within ±5 dB are typically treated as “consistent,” as this level of variability is inherent to the test.
     A single audiometric test may identify a concern, but multiple tests are needed to identify causes and determine appropriate actions.  The first test conducted establishes the subject’s baseline hearing sensitivity.  The subject should limit exposure to less than 80 dB SPL for a minimum of 14 hours prior to a baseline test, without the use of hearing protection devices (HPDs).  Some test protocols reduce the quiet period to 12 hours minimum or allow use of HPDs, but an extended period of “unprotected rest” is preferred.
     A baseline test is required within 6 months of an employee’s first exposure to the loud environment, though sooner is better.  Ideally, a baseline is established prior to the first exposure, thus eliminating any potential influence on the test results.
     Monitoring tests are conducted annually, at minimum.  They are often called, simply, annual tests, though more frequent testing is warranted, or even required, in some circumstances.  Monitoring tests are conducted without a preceding “rest” period, at the end of a work shift, for example.  Doing so provides information related to the effectiveness of HPDs, training deficiencies, etc.
     A retest is conducted immediately following a monitoring test indicating a 15 dB or greater hearing loss in either ear at any of the required test frequencies.  This is done to correct erroneous results caused by poor earphone fitment, abnormal noise intrusions, or other anomaly in the test procedure.
     A confirmation test is conducted within 30 days of a monitoring test indicating a significant threshold shift (discussed further in “The Results,” below).  Confirmation test protocols mimic those of a baseline test to allow direct comparison.
     Exit tests are conducted when an employee is no longer exposed to the loud environment.  This may also be called a transfer test when the cessation of exposure is due to a change of jobs within the organization, rather than termination of employment.  Exit test protocols also mimic those of a baseline test, facilitating assessment of the impact of working conditions over the course of the subject’s entire tenure.
 
The Results
     The results of an audiometric test are recorded on an audiogram; a blank form is shown in Exhibit 1.  Tone frequencies (Hz) are listed on the horizontal axis, increasing from left to right.  On the vertical axis, increasing from top to bottom, is the sound intensity level scale (dB).  This choice of format aligns with the concept of hearing sensitivity; points lower on the chart represent higher intensity levels required for a subject to hear a sound and, thus, lower sensitivity to the tested frequency.
     The audiogram shown in Exhibit 2 places examples of familiar sounds in relative positions of frequency and intensity.  Of particular interest is the “speech banana” – the area shaded in yellow that represents typical speech communications.  Presented this way, it is easy to see why differentiating between the letters “b” and “d” can be difficult.  These letters hold adjacent positions at the lower end of the speech frequency range, where several other speech sounds are also clustered.  This diagram also reinforces the idea that the ability to hear chirping birds and whispering voices are among the first to be lost; they are high-frequency, low-intensity sounds.
     Visual differentiation of data for each ear is achieved by using symbols and colors.  Each data point for a subject’s left ear is represented by an “X,” while each data point for the right ear is represented by an “O.”  Colors are not required; when they are used, the convention is to show left-ear data in blue and right-ear data in red.  The increased visual discrimination facilitates rapid interpretation of test results, particularly when all data for a subject are shown in a single diagram.  When baseline data are shown on a monitoring audiogram, the baseline data is typically shown in grey to differentiate between historical and current test data.
     The vertical scale represents a person’s hearing threshold – the minimum sound intensity level required for the test tone to be heard.  An example audiogram, representing “normal” hearing using the formatting conventions described above, is shown in Exhibit 3.  Sound stimuli above the line on the audiogram are inaudible; only those on or below the line can be heard by the subject.  Widely-accepted definitions of the extent of hearing loss are as follows:
  • normal hearing:  < 20 dB hearing level;
  • mild hearing loss:  20 – 40 dB hearing level;
  • moderate hearing loss:  40 – 70 dB hearing level;
  • severe hearing loss:  70 – 90 dB hearing level;
  • profound hearing loss:  > 90 dB hearing level.
     The example audiogram in Exhibit 4 also demonstrates the use of symbols and colors to differentiate data, though the dual-chart format makes it less critical.  The data is also tabulated to provide precise threshold levels for each frequency.
     A significant drop in sensitivity, in both ears, at 4000 Hz is depicted in Exhibit 4.  This is the infamous “4K notch,” indicative of noise-induced hearing loss (NIHL).  The appearance of this notch or other deviation from normal hearing should elicit an appropriate response.
     The presence of a notch in a baseline audiogram suggests that permanent hearing loss has already occurred.  Appropriate measures must be taken to ensure that no further damage occurs.  Furthermore, additional assessments may be necessary to ensure that the subject’s abilities are compatible with the work environment.  If diminished communication abilities create a hazard for the subject or others, for example, an appropriate reassignment should be sought.
     The appearance of a notch or other decline in hearing sensitivity in a monitoring audiogram should trigger follow-up testing.  A retest is conducted to ensure the validity of the data by verifying that the facility and equipment are operating within specifications and the test was conducted properly by both the subject and audiologist.  NIOSH recommends retesting when a monitoring audiogram indicates a 15 dB or greater increase in hearing level, relative to the baseline audiogram, at any frequency from 500 to 6000 Hz.
     If the monitoring and retest audiograms are consistent, two parallel paths are followed.  On one path, the subject undergoes a confirmation test to determine if the indicated hearing loss is permanent.  Appropriate follow-up actions are determined according to the results of this test.
     On the other path, HPD use and effectiveness is reviewed to determine necessary changes to the individual’s work process or to the HLPP more broadly.  Other changes to the work environment may also be necessary; noise-control strategies will be discussed further in future installments of this series.
     The decline in hearing sensitivity represented by a lower line on an audiogram is called a threshold shift.  When the arithmetic average of the differences between the baseline and monitoring audiograms at 2000, 3000, and 4000 Hz exceeds 10 dB in either ear, a standard threshold shift (STS) has occurred.  An STS is depicted in the comparative audiogram of Exhibit 5; calculation of the shift’s magnitude is shown in the table.
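The STS criterion can be expressed as a short Python check; the audiogram values below are hypothetical and are not the Exhibit 5 data:

```python
def average_threshold_shift(baseline, monitoring):
    """Average shift (dB) at 2000, 3000, and 4000 Hz for one ear.
    Inputs are dicts of {frequency_Hz: threshold_dB}."""
    freqs = (2000, 3000, 4000)
    return sum(monitoring[f] - baseline[f] for f in freqs) / len(freqs)

# Hypothetical thresholds for one ear:
baseline   = {2000: 10, 3000: 15, 4000: 20}
monitoring = {2000: 20, 3000: 30, 4000: 35}

shift = average_threshold_shift(baseline, monitoring)
print(round(shift, 1), shift > 10)  # 13.3 True — an STS has occurred
```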
     If the change in hearing sensitivity is shown by confirmation testing to be irreversible, a permanent threshold shift (PTS) has occurred.  Some level of hearing loss is recoverable with “rest” in a quiet setting.  This change is called a temporary threshold shift (TTS).  Appropriate action must be taken to prevent a TTS from becoming a PTS.
     A baseline audiogram represents a person’s “best-case” hearing or maximum sensitivity.  Therefore, if subsequent testing results in a “better” audiogram than the baseline, the baseline is replaced by the new audiogram.  This can occur if influences on the baseline test were not noticed or properly addressed.  Examples include an insufficient rest period preceding the test, intrusive noise or vibration in the test chamber, and suboptimal earphone fitment.
     Results other than a pronounced 4K notch can also prompt additional testing.  The series’ focus remains on NIHL; therefore, only a brief overview will be provided here.  Interested readers are encouraged to consult other sources for additional information.
     Bone-conduction testing is performed with transducers placed behind the ears.  This type of test may be warranted to diagnose an occlusion of the ear canal, which can include impacted cerumen (“earwax”), or other condition of the outer or middle ear that limits air-conducted hearing.  Conductive hearing loss is suggested by differences between air- and bone-conduction thresholds of greater than 10 dB.  An example audiogram depicting this condition in one ear is shown in Exhibit 6.
     A positively-sloped audiogram, such as that shown in Exhibit 7, depicts higher sensitivity to higher frequencies, often indicative of a disorder of the middle or inner ear.  In the case of Meniere’s disease, for example, audiometric testing may be used to validate a medical diagnosis, whereas the reverse is often true for other conditions.
     A negatively-sloped audiogram, such as that shown in Exhibit 8, depicts lower sensitivity to higher frequencies, often indicative of the advancement of presbycusis (age-related hearing loss).  Guidance on the appropriate use of an audiogram of this nature in the context of an HLPP varies.  A non-mandatory age-adjustment procedure remains in the OSHA standard (CFR 29 Part 1910.95 Appendix F), though NIOSH has rescinded support for the practice of “age correction”.  Organizations utilizing age-adjusted audiograms should consider that OSHA regulations tend to follow NIOSH recommendations; the lag on this specific matter has been quite long already.
The Bottom Line
     Noise-induced hearing loss (NIHL) is the accumulation of irreparable damage to the inner ear, particularly the fine hairs of the cochlea (see Part 2).  Hearing loss usually occurs in higher frequencies first.  The focus of audiometric testing on speech communication leads us to define “high frequency” as the 3000 – 6000 Hz range, where the 4K notch is of particular concern.  Hearing loss in frequencies above 8000 Hz often goes undiagnosed because the highest frequencies in the audible range are rarely tested.
     NIHL is one of several possible causes of hearing impairment.  Other causes include hereditary conditions, exposure to ototoxic substances, and illness (i.e. infection).  The various audiometric tests are valuable tools beyond the scope of NIHL; they can also aid diagnosis of several other conditions.  For example, a baseline audiogram may confirm the presence of a congenital disorder, or a confirmation test may reveal that an STS was caused by an illness from which, in the interim, the subject had recovered.
     A thorough, well-crafted health and wellness program will include audiometric testing.  In addition to the direct benefits of an HLPP, information about other conditions may also be obtained, further improving the work environment.  Psychological well-being of employees can be improved via increased effectiveness of verbal and nonverbal communication, in addition to the physical health benefits that participation in such a program can provide.


     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] “Hearing Protection.”  Laborers-AGC Education and Training Fund; July 2000.
[Link] “Criteria for a Recommended Standard - Occupational Noise Exposure, Revised Criteria 1998.”  Publication No. 98-126, NIOSH, June 1998.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.
[Link] ”29 CFR 1910.95 - Occupational noise exposure.’  OSHA.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Pediatric Audiology:  A Review.”  Ryan B. Gregg, Lori S. Wiorek, and Joan C. Arvedson.  Pediatrics in Review, July 2004.
[Link] “Familiar Sounds Audiogram:  Understanding Your Child’s Hearing.”  Hearing First, 2021.
[Link] “Hearing and Speech.”  University of California – San Francisco, Department of Otolaryngology – Head and Neck Surgery.
[Link] “Audiograms.”  ENT Education Swansea.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 4:  Sound Math]]>Wed, 06 Sep 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-4-sound-math     Occupational soundscapes, as outlined in Part 1, are comprised of many sounds.  Each has a unique source and set of defining characteristics.  For some purposes, treating all sounds in combination may be appropriate.  For others, the ability to isolate sounds is integral to the purpose of measuring sound levels.
     Of particular importance to a hearing loss prevention program (HLPP) is the ability to add, subtract, and average contributions to the sound pressure level (LP, SPL) in a workplace.  The ratios and logarithms used to calculate SPLs, presented in Part 3, complicate the arithmetic, but only moderately.  This installment of the “Occupational Soundscapes” series introduces the mathematics of sound, enabling readers to evaluate multiple sound sources present in workers’ environs.
     As mentioned in Part 3, sound pressure is influenced by the environment.  The number of sources, the sound power generated by each, and one’s location relative to each source contribute to the sound pressure level to which a person is subjected.

SPL Addition
     A representative example of a typical application will be used to place SPL addition in context.  This should make it easier to understand the process and its value to hearing conservation efforts.  Consider a manufacturing operation with several machines running in one department.  The company’s industrial hygienist has tasked the department manager with reducing the noise to which operators are exposed.  With a capital budget insufficient to replace machines or make significant modifications to the facility, the manager concludes that the only feasible option is to schedule work within the department such that, at all times, some machines remain idle (i.e. quiet).  To determine a machine schedule that will yield acceptable noise exposures while meeting production demands, SPLs generated by each machine are added in various combinations.
     A baseline for comparison must be established to evaluate sound-level reduction results.  The baseline SPL includes sounds from all sources and can be established by either the formula method or the table method.
     Using the formula method, the total (i.e. combined) SPL generated by n sources is calculated with the following equation:
LPt = 10 log (10^(LP1/10) + 10^(LP2/10) + … + 10^(LPn/10)), where LPt is the total SPL and LPi is the SPL generated by the ith source.  The calculation of total SPL for our example, which includes five machines, is tabulated in Exhibit 1, where it is found to be 96.8 dB.
     To use the table method, first sort the source SPLs in descending order.  Compare the two highest SPLs and determine the difference.  Find this value in the left column of the table in Exhibit 2 and the corresponding value in the right column.  Interpolation may be necessary, as only integer differences are tabulated.  Add the value from the right column to the higher SPL to obtain the combined SPL to be used in subsequent iterations.
     Compare the combined SPL to the next source in the sorted list, repeating the process described until all source SPLs have been added or they no longer contribute to the total SPL.  When adding a large number of sources, sorting SPLs first may allow the process to be abbreviated; once the difference between the combined SPL and the next source exceeds 10 dB, the remainder of the list need not be considered.
     A pictorial representation of the cascading calculations performed in the table method of SPL addition is provided in Exhibit 3, where the total SPL for our example is found to be 96.5 dB.  This result differs slightly from that attained by the more-precise formula method, but this need not be a concern.  The reduced complexity of computation often justifies the sacrifice of accuracy.  A 0.3 dB difference, like that found in our example, is imperceptible to humans and is, therefore, inconsequential.  While some circumstances warrant use of the formula method, the table method of SPL addition provides a convenient estimate without the need for a calculator.
     The department manager proposes running the machines in two groups – machines 1, 3, and 5 will run simultaneously, alternating with machines 2 and 4.  Total SPL calculations for each machine grouping, using the formula method and the table method, are shown in Exhibit 4 and Exhibit 5, respectively.
     Total SPL results are the same for both methods – 93.3 dB for machine group (1, 3, 5) and 94.3 dB for machine group (2, 4).  This represents 3.5 dB and 2.5 dB reductions, respectively, from the baseline SPL (all machines running).  These are consequential reductions in noise exposure; the proposal is accepted and the new machine operation schedule is implemented.  The total SPL remains high, however, and further improvements should be sought.
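For readers who want to verify the arithmetic, the formula method translates directly into a few lines of Python (a sketch; the helper name spl_add is mine, and the machine SPLs are the five values used throughout this example):

```python
import math

def spl_add(levels):
    """Combine sound pressure levels (dB) from independent sources:
    convert each level to a pressure-squared ratio, sum, convert back."""
    return 10 * math.log10(sum(10 ** (lp / 10) for lp in levels))

# SPLs of machines 1-5 from the example (dB)
machines = [88.0, 92.5, 91.0, 89.5, 83.5]

baseline = spl_add(machines)                                 # all five running -> ~96.8 dB
group_a = spl_add([machines[0], machines[2], machines[4]])   # machines 1, 3, 5 -> ~93.3 dB
group_b = spl_add([machines[1], machines[3]])                # machines 2, 4   -> ~94.3 dB
```

The same function reproduces the baseline and both machine-group totals cited above.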

SPL Subtraction
     To determine the SPL contribution of a single source, the “background” SPL is subtracted from the total.  Like SPL addition, there are two methods.
     Consider the 5-machine department presented in the SPL addition example; for this example, the SPL attributed to machine 3 is unknown.  With machine 3 turned off, the total SPL in the department is 95.5 dB; this is considered the background level with respect to machine 3.  Recall that the total SPL with all machines running is 96.8 dB.
     To subtract the background SPL by the formula method, use the following equation:
LPi = 10 log (10^(LPt/10) – 10^(LPb/10)), where LPi is the SPL of the source of interest, LPt is the total SPL (all machines running), and LPb is the background SPL (source of concern turned off).  In our example, LP3 = 10 log (10^9.68 – 10^9.55) = 90.9 dB.  The result can be verified using SPL addition:  LPt = 10 log (10^9.55 + 10^9.09) = 96.8 dB.
     To determine the SPL attributed to machine 3 by the table method, find the difference between the total and background SPLs (96.8 dB – 95.5 dB = 1.3 dB) in the left column of the table in Exhibit 6.  Subtracting the corresponding value in the right column of the table (~ 6.0 dB) from the total SPL gives the machine 3 SPL (96.8 dB – 6.0 dB = 90.8 dB).  Again, interpolation causes a small variance in the results that remains inconsequential.
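The subtraction formula is equally compact in Python (again, the function name is mine):

```python
import math

def spl_subtract(total, background):
    """SPL of a single source, given total and background levels (dB)."""
    return 10 * math.log10(10 ** (total / 10) - 10 ** (background / 10))

# machine 3's contribution, from the example: ~90.9 dB
machine3 = spl_subtract(96.8, 95.5)
```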

SPL Averaging

     Measurements may be repeated across time or varying conditions.  In our 5-machine department example, this may be to document different machine combinations or sounds generated during specific operations.  In the latter scenario, an average SPL may be a useful, though simplified, characterization of the environment.
     SPLs are averaged using the following formula:
LPavg = 10 log [(1/n) (10^(LP1/10) + … + 10^(LPn/10))], where n is the number of measurements to be averaged and LPi is an individual measurement.
     As an example, the SPLs of the 5-machine department example will be reinterpreted as multiple measurements in a single location.  Averaging the five SPL values (88.0, 92.5, 91.0, 89.5, and 83.5 dB) using the equation above gives LPavg = 89.8 dB; the arithmetic average of the same values is 88.9 dB.  When the range of SPLs to be averaged is small (e.g. < 5 dB), the arithmetic average can be used to approximate the decibel average.  The arithmetic and decibel average calculations for this example are shown in Exhibit 7.  Arithmetic averaging provides a convenient estimation method, but the decibel average should be calculated for any “official” purpose, as the two rapidly diverge.
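A Python sketch of decibel averaging (function name mine) makes the gap between the two averages easy to see; for these five values the decibel average (~89.8 dB) exceeds the arithmetic average (88.9 dB):

```python
import math

def spl_average(levels):
    """Decibel (energy) average of sound pressure levels (dB)."""
    return 10 * math.log10(sum(10 ** (lp / 10) for lp in levels) / len(levels))

readings = [88.0, 92.5, 91.0, 89.5, 83.5]
db_avg = spl_average(readings)             # ~89.8 dB
arith_avg = sum(readings) / len(readings)  # 88.9 dB
```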

     In the coming installments of the “Occupational Soundscapes” series, the connections between previous topics and hearing conservation begin to strengthen.  The discussion of audiometry brings together the physiological functioning of the ear (Part 2), speech intelligibility (introduced in Part 2), and the decibel scale (Part 3) to lay the foundation of a hearing loss prevention program.

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] An Introduction to Acoustics.  Robert H. Randall.  Addison-Wesley; 1951.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Noise Navigator Sound Level Database, v1.8.” Elliot H. Berger, Rick Neitzel, and Cynthia A. Kladden.  3M Personal Safety Division; June 26, 2015.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 3:  The Decibel Scale]]>Wed, 23 Aug 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-3-the-decibel-scale     In all likelihood, readers of this series have encountered the decibel scale many times.  It may have been used in the specifications of new machinery or personal electronic devices.  Some may be able to intuit the practical application of these values, but it is likely that many lack knowledge of the true meaning and implications of the decibel scale.
     This installment of the “Occupational Soundscapes” series introduces the decibel (dB) and its relevance to occupational noise assessment and hearing conservation.  Both those with no exposure to the scale and those who have a functional understanding, but lack foundational knowledge, benefit from understanding its mathematical basis.  The characteristics of sound to which it is most-often applied are also presented to continue developing the knowledge required to effectively support a hearing loss prevention program (HLPP).
The Decibel
     Two key characteristics of the decibel scale define its use and contribute greatly to its lack of common understanding.  First, it is a logarithmic scale.  Linear scales are more common, which may lead those unfamiliar with the decibel scale to assume it, too, is linear.
     Second, the decibel scale is a comparative measure, incorporating the ratio of the measured quantity to a reference value.  Absolute scales are more common, potentially leading to another erroneous assumption.
     Making either assumption leads to gross misinterpretation of the information provided by cited values.  Mathematically, the general expression of the decibel scale is:
L = 10 log (Q / Qref), where Q is the measured quantity and Qref is its reference value (all logarithms cited are base 10, log10, unless otherwise specified).  The multiplication factor of 10 converts Bels to decibels.  One Bel is defined as the increase corresponding to a tenfold increase in the ratio of values.  A decibel (dB) is, therefore, one tenth of a Bel.  The nature of the scale yields a dimensionless value that is valid for any system of units.

Sound Parameters
     To use the decibel scale effectively, in the context of occupational soundscapes, the interrelationships of power, intensity, and pressure must be understood.  Differentiating these measures is critical to understanding the true nature of the sound environment under scrutiny.
     Sound power (W), measured in watts (W), is the amount of acoustical energy produced by a sound source per unit time.  It is a characteristic of the source and is, therefore, independent of its location or surroundings.  In this discussion, it is assumed that sounds are generated by point sources, with sound dispersing spherically; variations will be introduced later.
     Sound intensity (I), measured in watts per square meter (W/m^2), is the sound power per unit area.  It is dependent on location, as it accounts for the dispersion of sound energy at a specified radial distance from the source:
I = W / (4 π r^2), where r is the radial distance (m) from the source.  The equation reveals that intensity decreases with the square of the distance from the source.  This inverse square law is depicted in Exhibit 1.
     Sound intensity, I, is a vector quantity.  In free-field conditions, however, the lack of obstructions and reflecting surfaces renders the specification of direction moot.  The intensity at a given distance from the source is equal in all directions.
     Sound pressure (P), measured in newtons per square meter (N/m^2) or, equivalently, Pascal (Pa), is the variable air pressure (force per unit area) superimposed on atmospheric pressure.  Propagation of pressure fluctuations as sound waves was introduced in Part 2; root mean square pressure (PRMS) is typically used.  Sound pressure is an effect of sound power generated by a source; it is influenced by the surrounding environment and distance from the source.
     Of the three parameters described, only pressure can be measured directly.  With adequate pressure data, however, it is possible to work backwards to obtain intensity and power values.  To do this, first calculate the RMS pressure of the sound wave.
     Sound intensity is calculated using the following formula:
I = P^2 / (ρ c), where P is the RMS pressure (Pa), ρ is the density of air, and c is the speed of sound in air.
     At standard conditions, ρ = 1.2 kg/m^3; though the density of air varies, this approximation provides sufficient accuracy for most purposes.  Likewise, the approximation of c = 343 m/s will typically suffice.
      With the intensity at a known distance from the source, calculating sound power is simple:
W = I A, where A is the spherical area at distance r (A = 4 π r^2).
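The pressure-to-intensity-to-power chain described above can be sketched in Python (helper names are mine; the 1 Pa reading at 2 m is a hypothetical measurement, not one from the text):

```python
import math

RHO = 1.2   # density of air at standard conditions (kg/m^3)
C = 343.0   # approximate speed of sound in air (m/s)

def intensity_from_pressure(p_rms):
    """I = P^2 / (rho * c); assumes free-field conditions."""
    return p_rms ** 2 / (RHO * C)

def power_from_intensity(intensity, r):
    """W = I * A, with A the area of a sphere of radius r (point source)."""
    return intensity * 4 * math.pi * r ** 2

# hypothetical reading: P_rms = 1 Pa measured 2 m from a point source
I = intensity_from_pressure(1.0)   # ~2.43e-3 W/m^2
W = power_from_intensity(I, 2.0)   # ~0.122 W
```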
 
Sound Levels
     In the previous section, sound power, intensity, and pressure were discussed in absolute terms.  More often, however, these measures are referenced by their levels, using the decibel scale.  Doing so makes the very wide range of values encountered more manageable.
     The sound power level (LW or PWL) is calculated using the general expression of the decibel scale, rewritten as:
LW = 10 log (W / Wref), where Wref is the reference power value; Wref = 10^-12 W.
     Likewise, the sound intensity level (LI or SIL) is calculated with the general expression rewritten as:
LI = 10 log (I / Iref), where Iref is the reference intensity value; Iref = 10^-12 W/m^2.
     Using the expression for LI and the inverse square law, it can be shown that 6 dB of attenuation is attained by doubling the distance from the source.  Choosing an arbitrary value, (I/Iref) = 40, at distance r, we get LI(r) = 10 log (40) = 16 dB.  Doubling the distance increases the area of the hypothetical sphere by a factor of 4 (see Exhibit 1).  With power constant, this increased area reduces intensity by a factor of 4, which, in turn, reduces (I/Iref) by the same factor.  Therefore, for our example, (I/Iref) = 10 at distance 2r and we get LI(2r) = 10 log (10) = 10 dB, a reduction of 6 dB.
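The 6 dB-per-doubling rule can be confirmed numerically (a Python sketch; the factor-of-40 ratio is the same arbitrary choice used above):

```python
import math

I_REF = 1e-12  # reference intensity (W/m^2)

def intensity_level(intensity):
    """Sound intensity level, LI = 10 log (I / Iref), in dB."""
    return 10 * math.log10(intensity / I_REF)

I_r = 40e-12                     # intensity at distance r: (I/Iref) = 40
L_r = intensity_level(I_r)       # 10 log 40 ~ 16 dB
L_2r = intensity_level(I_r / 4)  # doubling r quadruples the area, so I drops to I/4
drop = L_r - L_2r                # ~6 dB, independent of the starting ratio
```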
     Equating the two expressions for I, above, and rearranging, we get
P^2 = ρ c W / (4 π r^2).  In this form, it is easy to see that the square of pressure varies with 1/r^2, just as intensity (i.e. the first power) does.  Thus the general expression is rewritten for the sound pressure level (LP or SPL) as:  LP = 10 log (P / Pref)^2 = 20 log (P / Pref).
Pref is the reference pressure value; Pref = 2 x 10^-5 N/m^2 = 20 μPa, corresponding to the threshold of human hearing at 1000 Hz.  Exhibit 2 provides examples of decibel scale levels and corresponding absolute values of sound power, intensity, and pressure.  The following should be noted in the table:
  • Each of the reference values corresponds to 0 dB – when the ratio = 1, log (1) = 0.
  • Power and intensity are numerically equal at equal dB levels – numerically equal reference values are used.
  • All decimal places are shown, but small values of power and intensity are typically expressed in scientific notation (e.g. 10^-12) and small values of pressure are typically expressed in μPa.
  • Values in the table range from the threshold of human hearing to far beyond the threshold of pain (the range of human hearing will be discussed further in a future installment).
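The three level formulas can be collected into a short Python sketch (function names are mine); it confirms, for example, that each reference value maps to 0 dB and that a 1 Pa RMS pressure corresponds to ~94 dB SPL:

```python
import math

# reference values: power (W), intensity (W/m^2), pressure (Pa)
W_REF, I_REF, P_REF = 1e-12, 1e-12, 20e-6

def Lw(power):     return 10 * math.log10(power / W_REF)      # sound power level
def Li(intensity): return 10 * math.log10(intensity / I_REF)  # sound intensity level
def Lp(pressure):  return 20 * math.log10(pressure / P_REF)   # sound pressure level

spl_1pa = Lp(1.0)  # ~94 dB
```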
     Sound power and intensity levels are useful for acoustics projects – designing sound systems, venues, etc. – but sound pressure levels are most useful in quantifying occupational environments and supporting hearing conservation programs.  Examples of typical sound pressure levels encountered in commercial, recreational, and other settings are shown in Exhibit 3.  The “Noise Navigator,” an extensive database compiled and published by 3M Corporation, is available online.  In it, measurements of numerous sound levels are recorded, providing more useful data for research and planning purposes.

     Thus far, sounds have been treated as if generated by a singular point source in free-field conditions (no interference in spherical transmission).  Realistic soundscapes, however, are comprised of multiple complex sounds from various sources in environments where obstructions and reflective surfaces are ubiquitous.  In the next installment, the “Occupational Soundscapes” series begins to tackle the challenges of real-world conditions, presenting methods for assessing the effects of multiple simultaneous sounds on sound pressure levels.

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] An Introduction to Acoustics.  Robert H. Randall.  Addison-Wesley; 1951.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Noise Navigator Sound Level Database, v1.8.” Elliot H. Berger, Rick Neitzel, and Cynthia A. Kladden.  3M Personal Safety Division; June 26, 2015.
[Link] “Sound Intensity.”  Brüel & Kjaer; September 1993.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 2:  Mechanics of Sound and the Human Ear]]>Wed, 09 Aug 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-2-mechanics-of-sound-and-the-human-ear     A rudimentary understanding of the physics of sound and the basic functions of the human ear is necessary to appreciate the significance of test results, exposure limits, and other elements of a hearing loss prevention program (HLPP).  Without this background, data gathered in support of hearing conservation have little meaning and effective protections cannot be developed and implemented.
     This installment of the “Occupational Soundscapes” series provides readers an introduction to the generation and propagation of sound and the structure and function of the human ear; it is not an exhaustive treatise on either subject.  Rather, it aims to provide a foundation of knowledge – a refresher, for many – on which future installments of the series build, without burdening readers with extraneous or potentially confusing detail.
Sound Generation and Propagation
     Sound can travel through solid, liquid, and gaseous media.  As our primary interest is in human hearing, this presentation focuses on sound propagation through air.  It should be noted, however, that vibrations in other media can be transferred to surrounding air and are, therefore, ultimately perceptible to the human ear.  In fact, structure-borne noise is a prominent component of many occupational soundscapes.
     In air, sound is propagated via longitudinal pressure waves.  The movement of particles in a longitudinal wave is parallel to the wave’s direction of travel.  In contrast, particles move perpendicular to the travel direction of a transverse wave; a common example is a ripple in water.  A pressure wave consists of alternating regions of high and low pressure.  These are known as compressions and rarefactions, respectively, as the air pressure oscillates above and below ambient atmospheric pressure as portrayed in Exhibit 1.
     Sound waves are most-often referenced by their amplitude (A) and frequency (f).  A sound wave’s amplitude is the maximum pressure deviation from the ambient (μPa).  It is related to the perception of “loudness” of the sound.
     Instantaneous sound pressure is often a less useful measure than one that is time-based, such as an average.  However, the average of a sine wave’s amplitude is zero and, thus, unhelpful.  For a metric comparable across time and events, the root mean square (RMS) sound pressure (PRMS) is used.  To calculate PRMS:
(1) Consider the sound pressure waveform over a specified time period; the period of time considered is important for comparison of sound environments or events.
(2) Square (multiply by itself) the waveform (i.e. pressures) [P^2].
(3) Average the pressure-squared waveform (mean pressure squared) [P^2avg].
(4) Take the square root of the mean pressure squared (RMS pressure) [PRMS].
The steps of this process are represented graphically in Exhibit 2 and mathematically by the equation:
PRMS = √[(1/T) ∫ P^2(t) dt], where T is the measurement period.  Squaring the pressures ensures that RMS values are always positive, simplifying use and comparison.
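The four steps above can be sketched in Python for a sampled waveform (a minimal example; the pure tone is a convenient test case because its RMS value is known to be A/√2):

```python
import math

def p_rms(samples):
    """Root-mean-square of a sampled pressure waveform (Pa):
    square each sample, average the squares, take the square root."""
    return math.sqrt(sum(p * p for p in samples) / len(samples))

# one full cycle of a 1 Pa amplitude sine wave, sampled at 1000 points;
# for a pure tone, P_rms = A / sqrt(2) ~ 0.707 Pa
samples = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
rms = p_rms(samples)
```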
     A wave’s frequency is the number of complete wave cycles to pass a fixed point each second (Hz or cycles per second).  Frequency is related to the perception of a sound’s pitch.  A wave’s period (T), the time required for one complete wave cycle to pass a fixed point, is simply the inverse of its frequency:  T = 1/f (s).
     The wavelength (λ), the length of one complete wave cycle, is related to frequency and the speed of sound (c):  λ = c/f (m).  Though the speed of sound in air is influenced by temperature and density (i.e. elevation), an approximation of c = 343 m/s (1125 ft/s) is often used in lieu of calculating a more precise value.
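These relationships are simple enough to check in a few lines of Python (constants and function names are mine):

```python
C = 343.0  # approximate speed of sound in air (m/s)

def period(f):
    """Period T = 1/f (s) for frequency f (Hz)."""
    return 1.0 / f

def wavelength(f):
    """Wavelength lambda = c/f (m) for frequency f (Hz)."""
    return C / f

# a 1000 Hz tone: T = 1 ms, wavelength ~0.343 m
T = period(1000)
lam = wavelength(1000)
```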
 
     Speech is generated by forcing air through the larynx.  Movement of the vocal cords creates pressure fluctuations that manifest as complex sound waves.  These complex waves include carriers and modulation superimposed upon them.  Timing of modulation differentiates similar sounds, such as the letters “b” and “p;” therefore, resolution of these timing differences in the auditory system is integral to speech intelligibility.

     Maintaining speech communication abilities is paramount to a hearing loss prevention program (HLPP).  As such, understanding typical characteristics of voiced sounds is critical to its success.  Exhibit 3 shows the average (dark line) and range (shaded region) of sound pressures created by a small, homogeneous sample of subjects (seven adult males) reciting the same nonsensical sentence.  Though unrepresentative of the diversity of the broader population, the results are indicative of the variability that can be expected in a wider study.  A more-generalized data set is depicted in Exhibit 4, suggesting that the most-critical speech frequencies lie in the range of 170 – 4000 Hz (dB scales and the relation to speech communication will be explored in a future installment of the series).
     In addition to the voiced sounds of “normal” speech, humans generate unvoiced, or breath, sounds.  These occur when air is passed through the “vocal equipment” (i.e. larynx, mouth) without activating the vocal cords.  Breath sounds, acting as low-energy carriers, make whispering possible.
 
     All sounds in an environment – wanted and unwanted – impinge on occupants indiscriminately.  It is up to the auditory system of each occupant to receive, process, resolve, differentiate, locate, and interpret these sounds collectively and/or individually as circumstances dictate.  Much of this work is performed by a sophisticated organ that often garners little attention:  the seemingly underappreciated ear.
 
Structure and Function of the Human Ear
     Exhibit 5 provides a pictorial representation of the ear’s complexity; a detailed discussion of each component is impractical in this introductory presentation.  Instead, a brief description of some critical components and their roles in the perception of sound is offered.  It won’t make “experts” of readers, but will provide the basic understanding needed to support hearing conservation efforts.
     Hearing – the perception of sound – takes place in three “stages” corresponding to the three regions of the ear:  outer, middle, and inner.  Common use of the word “ear” often refers only to the outer ear, the region highlighted in Exhibit 6.  Many times, it is intended to reference only the visible, cartilaginous portion called the pinna or auricle.  The pinna is most famous for adornment with jewelry and being the part of a misbehaving child pulled by a TV sitcom mom.
     Sound waves in the environment impinge upon the outer ear, where the pinna helps direct them into the external auditory canal, or simply ear canal.  The structure of the ear canal causes it to resonate, typically, in the range of 3 – 4 kHz, providing an amplification effect for sounds at or near its resonant frequency.
     The terminus of the outer ear is the tympanic membrane, commonly known as the eardrum.  The variable pressure of the impinging sound waves causes this diaphragm-like structure to flex inward and outward in response.
 
     In the middle ear, highlighted in Exhibit 7, the vibrational energy of the flexing eardrum is transmitted to another membrane in the inner ear via a linkage of three small bones, collectively called the ossicles or ossicular chain.  The malleus is attached to the eardrum and the stapes is attached to a membrane in the oval window of the cochlea.  Between these two lies the incus.  The malleus, incus, and stapes are commonly known as the hammer, anvil, and stirrup, respectively.
     The configuration of the ossicular chain provides approximately a 3:1 mechanical advantage.  In conjunction with the relative sizes of the eardrum and oval window, the middle ear provides an overall mechanical advantage of approximately 15:1.  The ability to hear very soft sounds is attributed to the amplification effect produced in the middle ear.
     The Eustachian tube connects the middle ear to the nasal cavity, enabling pressure equalization with the surrounding atmosphere.  Blockage of the Eustachian tube, due to infection, for example, results in pressure deviations that can reduce hearing sensitivity, potentially to the extent of deafness.
     Two small muscles, the stapedius and the tensor tympani, serve a protective function against very loud sounds.  These muscles act on the bones of the ossicular chain, changing its transmission characteristics to reduce the energy transmitted to the inner ear.  This protection mechanism is only available for sustained sounds, as the reflexive contraction of these muscles, known as the acoustic reflex, does not engage rapidly enough to attenuate sudden bursts of sound, such as explosions or gunshots.
 
     The inner ear, highlighted in Exhibit 8, is comprised primarily of the cochlea.  The cochlea is an extraordinary organ in its own right; its presentation here is, necessarily, an extreme simplification.  Many of its components and functional details will not be discussed, as a descriptive overview is of greater practical value with respect to hearing conservation.
     The motion of the stapes (stirrup) in the oval window induces pressure fluctuations in the fluid in the chambers of the cochlea.  The mechanical advantage provided by the middle ear serves to overcome the impedance mismatch between the air in the outer and middle ear and the liquid in the inner ear.  As mentioned previously, this maintains sensitivity to low-intensity sounds.
     The fluid movement, in turn, causes tiny hair cells in the cochlea to bend in relation to the sound energy transmitted.  These hairs are selectively sensitive to frequency; the extent of bending is proportional to the loudness of the sound.  It is this bending of hair cells that is translated into electrical signals that are sent to the brain for interpretation.  Damaging these sensitive hairs leads to reduced hearing sensitivity.  They are also nonregenerative; therefore, hearing loss caused by damaging these hairs is permanent and irrecoverable.  Though other mechanisms of damage exist, NIHL is a prominent and important one.
     The basilar membrane, separating the chambers of the cochlea, is also selectively sensitive to frequency, due to its varying mass and stiffness.  This results in the tonotopic organization of the cochlea, as depicted in Exhibit 9.  The highest frequencies (~20 kHz) are detected near the basal end of the membrane (i.e. nearest the oval window); sensitivity shifts to progressively lower frequencies along the cochlear spiral.  Sensitivity to frequencies above ~2 kHz, including critical speech frequencies, is concentrated in the first 3/4 “coil” of the cochlea.  The range of human hearing and the critical speech frequencies will be discussed further in a future installment.
     The semicircular canals are highly recognizable, projecting from the cochlea’s distinct snail-like shape, but they play no significant role in hearing.  They are, however, integral to the critical function of maintaining balance which enables humans to walk upright.  Because they share a fluid supply with the cochlea, issues with hearing and balance can be correlated during a traumatic event.
     The introduction to Part 1 of the series called attention to several parallels between occupational soundscapes and thermal work environments.  The fluids contained in the cochlea may demonstrate a more-direct link.  There are two key fluids contained in chambers of the cochlea.  One, perilymph, is sodium-rich and the other, endolymph, is potassium-rich.  As discussed in the “Thermal Work Environments” series (Part 3), depletion of these electrolytes (salts) can be caused by profuse sweating and/or ingesting large quantities of water without balanced replenishment.  In addition to causing heat cramps and other afflictions, it seems heat stress could affect your hearing!

     There is a great deal more detail available from various sources to explain the mechanics of hearing, particularly the inner workings of the cochlea.  It is a fascinatingly complex organ; intrigued readers are encouraged to consult the references at the end of this post, as well as medical texts, to learn more.  Despite the requisite simplification of this presentation, sufficient information has been included to enable readers to continue on this journey of sound exploration in pursuit of the ultimate goal: effective hearing conservation practices.

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] “29 CFR 1910.95 - Occupational noise exposure.”  OSHA.
[Link] “Noise Control Design Guide.” Owens Corning; 2004.
[Link] Engineering Noise Control – Theory and Practice, 4ed.  David A. Bies and Colin H. Hansen.  Taylor & Francis; 2009.
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Hearing Protection.”  Laborers-AGC Education and Training Fund; July 2000.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] An Introduction to Acoustics.  Robert H. Randall.  Addison-Wesley; 1951.
[Link] “How Hearing Works.”  Hearing Industries Association; 2023.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 1:  An Introduction to Noise-Induced Hearing Loss]]>Wed, 26 Jul 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-1-an-introduction-to-noise-induced-hearing-loss     Exposure to excessive noise in the workplace can have profound effects, both immediate and long-term.  Some consequences are obvious, while others may surprise those that have not studied the topic.
     Some industries, such as mining and construction, are subject to regulations published specifically for them.  This series presents information, including regulatory controls, that is broadly applicable to manufacturing and service industries.
     Several parallels exist between exposure to noise and heat stress (see the “Thermal Work Environments” series).  These include the relevance of durations of exposure and recovery, the manifestation of cognitive, as well as physical, effects on workers, and the importance of monitoring exposure and risk factors.
     To take advantage of these parallels, the “Occupational Soundscapes” series follows a path similar to that taken in the “Thermal Work Environments” series.  Terminology, physiological implications, measurement, and guidance for managing the risks are each discussed in turn.
 
Terms in Use
     The title “Occupational Soundscapes” was chosen to maintain the focus of the series on two important aspects.  First, “occupational” reminds readers that the subject matter context is the workplace.  Managing sound and preventing occupational noise-induced hearing loss (NIHL) – hearing loss caused by workplace noise – is the objective of the series.  This differentiates occupational hearing loss from other causes.  Other forms of hearing loss can occur in addition to NIHL; these include:
  • presbycusis – naturally occurring hearing loss due to aging.
  • sociocusis – caused by recreational or non-occupational activities, such as music, aviation, motorsports, or arena sports.
  • nosocusis – caused by environmental factors such as exposure to chemicals, behaviors such as drug use, or underlying health conditions such as hypertension.
These types of hearing loss are presented to provide clarity to occupational causes, but will not be discussed in detail.
     The second term of the title, “soundscapes,” serves to remind readers that workplaces are filled with a combination of sounds; some are desired, others are detrimental to working conditions.  Each contribution to the soundscape has a unique source and set of parameters.
     Much of this series focuses on the reduction, control, and protection from noise – the unwanted portion of the soundscape – but readers should not lose sight of the wanted sound.  One very important reason to control noise is to maintain accessibility of desired sounds.  Speech communication is of particular importance and is the primary focus of audiometric testing and industrial noise-control regulation.
     In its “Criteria for a Recommended Standard – Occupational Noise Exposure, Revised Criteria” (1998), NIOSH declares that its focus is on prevention of hearing loss rather than conservation of hearing.  This emphatic declaration is somewhat bizarre, as this is a distinction without a difference.  The terms are functionally equivalent, particularly in practical matters, to which “The Third Degree” is committed.  Readers will be spared a detailed explanation of why this is true; suffice to say that references to hearing loss prevention, hearing conservation, and hearing preservation are considered interchangeable.
 
     While paralleling the information presentation of the “Thermal Work Environments” series, the objectives pursued in this series will also mimic those of its predecessor series.  In brief, each installment is limited in scale and scope to be palatable to busy practitioners, easily referenced, edited, or expanded as future development requires it.  To further promote a holistic approach to job design, the two series should be read as companion pieces.  Side-by-side review of thermal and aural requirements of a workplace may reveal complementary or synergistic solutions, increasing the efficiency of industrial hygiene improvement efforts.
 
     Links to the entire series are provided at the end of this post for easy reference.
 
     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] “Criteria for a Recommended Standard - Occupational Noise Exposure, Revised Criteria 1998.”  Publication No. 98-126, NIOSH, June 1998.
[Link] “29 CFR 1910.95 - Occupational noise exposure.”  OSHA.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
 
Directory of “Occupational Soundscapes” entries on “The Third Degree.”
Part 1:  An Introduction to Noise-Induced Hearing Loss (26Jul2023)
Part 2:  Mechanics of Sound and the Human Ear (9Aug2023)
Part 3:  The Decibel Scale (23Aug2023)
Part 4:  Sound Math (6Sep2023)
Part 5:  Audiometry (20Sep2023)
Part 6:  Measurement of Sound Exposure (4Oct2023)
Part 7:  Perceptions (10Jan2024)
Part 8:  Effects of Exposure (24Jan2024)
Part 9:  Concepts in Communication (7Feb2024)
Part 10:  Communication Systems
Part 11:
Part 12:
Part 13:
Part 14:
Part 15:
]]>
<![CDATA[Thermal Work Environments – Part 5:  Managing Conditions in Hot Environments]]>Wed, 12 Jul 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-5-managing-conditions-in-hot-environments     Safeguarding the health and well-being of employees is among the critical functions of management.  In hot workplaces, monitoring environmental conditions and providing adequate protection comprise a significant share of these responsibilities.  The details of these efforts are often documented and formalized in a heat illness prevention program.
     An effective heat illness prevention program consists of several components, including the measure(s) used for environmental assessment, exposure limits or threshold values, policies defining the response to a limit or threshold being reached, content and schedule of required training for workers and managers, and the processes used to collect and review data and modify the program.  Other information may be added, particularly as the program matures.  Though it is nominally a prevention program, response procedures, such as the administration of first aid, should also be included; the program should not be assumed to be infallible.
     In this installment of the “Thermal Work Environments” series, the components of heat stress hygiene and various control mechanisms are introduced.  Combined with the types of information mentioned above, an outline of a heat illness prevention program emerges.  This outline can be referenced or customized to create a program meeting the needs of a specific organization or work site.
     The content of a heat illness prevention program is presented in five (5) sections:
  • Training
  • Hazard Assessment
  • Controls
  • Monitoring
  • Response Plans
A comprehensive review of each would be unwieldy in this format.  Instead, an overview of the information is provided as an introduction and guide to further inquiry when one begins to develop a program for his/her team.

Training
     Every person that works in or has responsibility for a hot workplace should be trained on the ramifications of excess heat.  Information relevant to the following four sections is included in an effective training program.  Examples of important topics for all team members include:
  • basics of human biometeorology and heat balance,
  • environmental, personal, and behavioral risk factors,
  • methods used to monitor conditions,
  • controls in place to prevent heat illness,
  • signs and symptoms of heat illness, and
  • first aid and emergency response procedures.
Training of supervisors and team leaders should emphasize proper use of controls, signs and symptoms, and appropriate responses to heat illness and failure of control mechanisms.
     A complete training plan includes the content of the training and a schedule for delivery.  It may be best to distribute a large amount of information among multiple modules rather than share it in a single, long presentation.  Refresher courses of reduced duration and intensity should also be planned to combat complacency and to update information as needed.  Refreshers are particularly helpful when dangerous conditions exist intermittently or are seasonal.
 
Hazard Assessment
     An initial hazard assessment consists of identifying the elements of job design (see Part 1) that are heat-related.  These include environmental factors, such as:
  • atmospheric conditions (e.g. temperature, humidity, sun exposure),
  • air movement (natural or forced), and
  • proximity to heat-generating processes or equipment.
Also included are job-specific attributes, such as:
  • intensity of work (i.e. strenuousness and rate),
  • personal protective equipment (PPE) and other gear required, and
  • availability of potable water and a cool recovery area.
Other relevant factors may also be identified.  For example, a compound effect could be discovered in which the concentration required for task performance induces anxiety that, in turn, increases heat stress.
     Using the information collected in the hazard assessment, a risk profile can be created for each job.  The risk profile is then used to prioritize the development of controls and modifications to the job design.
 
Controls
     Similar to that for quality [see “The War on Error – Vol. II:  Poka Yoke (What Is and Is Not)” (15Jul2020)], there is an implied hierarchy of controls used to manage heat-related effects on workers.  Engineering controls modify the tasks performed or the surrounding conditions, while administrative controls guide workers’ behavior to reduce heat stress.  Finally, personal protective equipment (PPE) is used to manage heat stress that could not otherwise be reduced.  PPE is often the first protection implemented and is used until more-effective controls are developed.
     A comprehensive heat stress control plan is developed by considering each term in the heat balance equation (see Part 2).  Examples of engineering controls include:
  • To reduce metabolic heat generation, M, provide lift assists, material transport carts, or other physical aids to limit workers’ exertion.
  • To reduce radiative heat load, R, install shields between heat sources (e.g. furnaces or other hot equipment) and workers, just as an umbrella is used to block direct sunlight.
  • Use fans to increase evaporative cooling, E, when air temperature is below 95° F (35° C).
  • Reduce air temperature with water-mist systems if relative humidity (RH) is below 50% and general air conditioning is not practical.
     Administrative control examples include:
  • Establish policies that limit work during periods of excessive heat; thresholds can be based on Heat Index (HI), Wet Bulb Globe Temperature (WBGT), or other index.  The American Conference of Governmental Industrial Hygienists (ACGIH) regularly publishes threshold limit values (TLVs) based on WBGT with adjustments for clothing and work/rest cycles.  ACGIH TLVs often serve as the basis for standards and guidelines developed by other organizations and government agencies.
  • Reduce M by increasing periods of rest in the work cycle.
  • Implement an acclimatization program for new and returning workers that allows them to develop “resistance” to heat stress.
  • Encourage proper hydration; ensure availability of cool potable water.
     Heat-related PPE examples include:
  • Reflective clothing to reduce radiative heat load, R.
  • A vest cooled with ice, forced air, or water increases conductive, K, and/or convective, C, heat loss.
  • Bandana, hat or similar item that can be wetted to enhance evaporative cooling, E, particularly from the head and neck.
  • Hydration backpack.
     Many controls are used in conjunction to achieve maximum effect.  Tradeoffs must be considered to ensure that the chosen combination of controls is the most-effective.  For example, cooling with a water-mist system increases humidity; if it begins to inhibit evaporation from skin, its use may be inadvisable.
 
Monitoring
     Monitoring is a multifaceted activity and responsibility.  In addition to measuring environmental variables, the effectiveness of controls and the well-being of workers must be continually assessed.  A monitoring plan includes descriptions of the methods used to accomplish each.
     Measurement of environmental variables is the subject of Part 4 of this series.  As discussed in that installment, multiple indices may be used to inform decisions regarding work cycle modifications or stoppages.  Those used in popular meteorology, such as Heat Index (HI), are often insufficient to properly characterize workplace conditions; however, they can be useful as early warnings that additional precautions may be needed to protect workers during particularly dangerous periods.  See “Heat Watch vs. Warning” for descriptions of alerts that the National Weather Service (NWS) issues when dangerous temperatures are forecast.
     After controls are implemented, they must be monitored for proper use and continued effectiveness.  This should be done on an ongoing basis, though a formal report may be issued only at specified intervals (e.g. quarterly) or during specific events (e.g. modification of a control).  Verification test procedures should be included in the monitoring plan to maintain consistency of tests and efficacy of controls.
     Monitoring the well-being of workers is a responsibility shared by a worker’s team and medical professionals.  Prior to working in a hot environment, each worker should be evaluated on his/her overall health and underlying risk factors for heat illness.  An established baseline facilitates monitoring a worker’s condition over time, including the effectiveness of acclimatization procedures and behavioral changes.
     Suggestions for behavioral changes, or “lifestyle choices,” can be made to reduce a worker’s risk; these include diet, exercise, consumption of alcohol or other substances, and other activities.  Recommendations to an employer regarding one’s fitness for certain duties, for example, must be made in such a way that protects both safety and privacy.  Heat-related issues may be best addressed as one component of a holistic wellness program such as those established by partnerships between employers, insurers, and healthcare providers.
 
Response Plans
     There are three (3) response plans that should be included in a heat illness prevention program.  Somewhat ironically, two of them are concerned with heat illness that was not prevented.
     The first response plan details the provisioning of first aid and subsequent medical care when needed.  Refer to Part 3 for an introduction to heat illnesses and first aid.
     The second outlines the investigation required when a serious heat illness or heat-related injury or accident occurs.  The questions it must answer include:
  • Were defined controls functioning and in proper use?
  • Had the individual(s) involved received medical screening and been cleared for work?
  • Had recommendations from prescreens been followed by individual(s) and the organization?
  • Had the individual(s) been properly acclimatized?
  • Were special circumstances involved (e.g. heat advisory, other emergency situation, etc.)?
The investigation is intended to reveal necessary modifications to the program to prevent future heat illness or heat-related injury.
     The final response plan needed defines the review process for the heat illness prevention program.  This includes the review frequency, events that trigger additional scrutiny and revision, and required approvals.
 
 
     Currently, management of hot work environments is governed by the “General Duty Clause” of the Occupational Safety and Health Act of 1970.  The General Duty Clause provides umbrella protections for hazards that are not explicitly detailed elsewhere in the regulations.  It is a generic statement of intent that provides no specific guidance for assessment of hazards or management of risks.
     In 2021, OSHA issued an “advance notice of proposed rulemaking” (ANPRM) to address this gap in workplace safety regulations.  A finalized standard, added to the Code of Federal Regulations (CFR), will add specific enforcement responsibilities to OSHA’s current role of education and “soft” guidance on heat-related issues.
     That an OSHA standard will reduce heat-related illness and injury is a reasonable expectation.  However, it must be recognized that it, too, is imperfect.  No standard or guideline can account for every person’s unique experience of his/her environment; therefore, an individual’s perceptions and expressions of his/her condition (i.e. comfort and well-being) should not be ignored.  A culture of autonomy, or “self-determination,” where workers are self-paced, or retain other responsibility for heat stress hygiene, is one of the most powerful tools for safety and health management imaginable.
 
 
     For additional guidance or assistance with complying with OSHA regulations, developing a heat illness prevention program, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “NIOSH Criteria for a Recommended Standard Occupational Exposure to Heat and Hot Environments.”  Brenda Jacklitsch, et al.  National Institute for Occupational Safety and Health (Publication 2016-106); February 2016.
[Link] “Threshold Limit Values for Chemical Substances and Physical Agents.”  American Conference of Governmental Industrial Hygienists (ACGIH); latest edition.
[Link] “National Emphasis Program – Outdoor and Indoor Heat-Related Hazards.”  Occupational Safety and Health Administration (OSHA); April 8, 2022.
[Link] “Ability to Discriminate Between Sustainable and Unsustainable Heat Stress Exposures—Part 1:  WBGT Exposure Limits.”  Ximena P. Garzon-Villalba, et al.  Annals of Work Exposures and Health;  June 8, 2017.
[Link] “Ability to Discriminate Between Sustainable and Unsustainable Heat Stress Exposures—Part 2:  Physiological Indicators.”  Ximena P. Garzon-Villalba, et al.  Annals of Work Exposures and Health;  June 8, 2017.
[Link] “The Thermal Work Limit Is a Simple Reliable Heat Index for the Protection of Workers in Thermally Stressful Environments.”  Veronica S. Miller and Graham P. Bates.  The Annals of Occupational Hygiene; August 2007.
[Link] “Thermal Work Limit.”  Wikipedia.
[Link] “The Limitations of WBGT Index for Application in Industries: A Systematic Review.”  Farideh Golbabaei, et al.  International Journal of Occupational Hygiene; December 2021.
[Link] “Evaluation of Occupational Exposure Limits for Heat Stress in Outdoor Workers — United States, 2011–2016.”  Aaron W. Tustin, MD, et al.  Morbidity and Mortality Weekly Report (MMWR).  Centers for Disease Control and Prevention; July 6, 2018.
[Link] “Occupational Heat Exposure. Part 2: The measurement of heat exposure (stress and strain) in the occupational environment.”  Darren Joubert and Graham Bates.  Occupational Health Southern Africa Journal; September/October 2007.
[Link] “Heat Stress:  Understanding factors and measures helps SH&E professionals take a proactive management approach.”  Stephanie Helgerman McKinnon and Regina L. Utley.  Professional Safety; April 2005.
[Link] “The Heat Death Line: Proposed Heat Index Alert Threshold for Preventing Heat-Related Fatalities in the Civilian Workforce.”  Zaw Maung and Aaron W. Tustin.  NEW SOLUTIONS: A Journal of Environmental and Occupational Health Policy; June 2020.
[Link] “Loss of Heat Acclimation and Time to Re-establish Acclimation.”  Candi D. Ashley, John Ferron, and Thomas E. Bernard.  Journal of Occupational and Environmental Hygiene; April 2015.
 

Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 4:  A Measure of Comfort in Hot Environments]]>Wed, 28 Jun 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-4-a-measure-of-comfort-in-hot-environments     Since the early 20th century, numerous methods, instruments, and models have been developed to assess hot environments in absolute and relative terms.  Many people are most familiar with the “feels like” temperature cited in local weather reports, though its method of determination can also vary.  Index calculations vary in complexity and the number of included variables.
     Despite the ever-improving accuracy and precision of instrumentation, heat indices remain models, or approximations, of the effects of hot environments on comfort and performance.  The models may also be applicable only in a narrow range of conditions.  When indices are routinely cited by confident “experts,” without qualifying information, those in the audience may attribute greater value to them than is warranted.
     Incorporating the range of possible environmental conditions and human variability requires an extremely complex model, rendering its use in highly-dynamic workplaces infeasible.  Though imperfect, there are models and methods that can be practically implemented for the protection of workers in hot environments.
     A comprehensive presentation of heat stress modeling is far beyond the scope of this series.  Instead, this installment presents two types of indices:
(1) indices that are familiar to most, such as those used in weather reports, and
(2) practical assessments of hot work places; i.e. indices derived from noninvasive measurement of environmental conditions.
 
Popular Meteorology
     Short-term weather forecasts are concerned, largely, with predicting levels of comfort.  Forecasters use temperature indices to convey what the combination of conditions “feels like” relative to a reference set of conditions (e.g. dry bulb temperature at 50% relative humidity).  Methods of calculation and individuals’ perceptions vary; thus, temperature indices are generally more reliable as temporal comparisons than geographical ones.
     The “apparent temperature” (AT) exemplifies the temporal utility and geographical futility of such indices.  AT has been defined in various ways, hindering meaningful aggregation of data.  This slip into generic use of the term also precludes a detailed discussion here; however, AT can still be a valuable reference in some applications.  If approximations are sufficient, a simple look-up table, such as that in Exhibit 1, can be used for rapid reference.
     As seen in the table, this formulation of apparent temperature accounts only for ambient temperature and relative humidity (RH).  Readers unfamiliar with meteorological instrumentation or terminology can interpret “dry bulb temperature,” Tdb, as “reading from standard thermometer.”
 
     Heat Index (HI), used by the National Weather Service (NWS), incorporates several variables in the calculation of apparent, or perceived, temperature.  Derived by multiple regression analysis (far beyond the scope of this series), calculation of HI has been simplified by choosing constant values for all variables except dry bulb temperature (Tdb) and relative humidity (RH).  To maintain brevity in this presentation, the selection of these values will not be discussed; practical application is not hampered by this omission.  Simplifications notwithstanding, calculation of Heat Index remains a multi-step process.
     First, the “simple” Heat Index equation is used:
     HI1 = 0.5 { Tdb + 61.0 + [(Tdb – 68.0) * 1.2] + (0.094 * RH)},
where Tdb is measured in degrees Fahrenheit (° F) and RH is given in percent (%).  If HI1 ≥ 80° F, the “full” Heat Index equation is used and required adjustments are applied.
     The full Heat Index equation incorporates the constant values selected for constituent variables.  Using temperatures measured on the Fahrenheit scale (subscript “F”),
     HIF = -42.379 + 2.04901523 * Tdb + 10.14333127 * RH – 0.22475541 * Tdb * RH – 0.00683783 * Tdb^2 – 0.05481717 * RH^2 + 0.00122874 * Tdb^2 * RH + 0.00085282 * Tdb * RH^2 – 0.00000199 * Tdb^2 * RH^2 .
Using temperatures measured on the Celsius scale (subscript “C”),
     HIC = -8.78469476 + 1.61139411 * Tdb + 2.33854884 * RH – 0.14611605 * Tdb * RH – 0.012308094 * Tdb^2 – 0.016424828 * RH^2 + 0.002211732 * Tdb^2 * RH + 0.00072546 * Tdb * RH^2 – 0.000003582 * Tdb^2 * RH^2 ,
where Tdb is measured in degrees Celsius (° C) and RH is given in percent (%).
     Under certain conditions, an adjustment to the calculated HI is needed.  When RH < 13% and 80 < Tdb (° F) < 112 [26.7 < Tdb (° C) < 44.5], the following adjustment factor is added to the calculated HI:
     Adj1F = -{[(13 – RH)/4] * SQRT([17 - | Tdb – 95|]/17)} or
     Adj1C = -{[(13 – RH)/7.2] * SQRT([17 - |1.8 *  Tdb – 63|]/17)} .
When RH > 85% and 80 < Tdb (° F) < 87 [26.7 < Tdb (° C) < 30.5], the following adjustment factor is added to the calculated HI:
     Adj2F = [(RH – 85)/10] * [(87 – Tdb)/5] or
     Adj2C = 0.02 * (RH – 85) * (55 – 1.8 * Tdb)/1.8 .
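     The multi-step procedure above lends itself to implementation in software.  The following Python sketch follows the Fahrenheit-scale equations exactly as presented (the function name and structure are illustrative only, not an official NWS implementation):

```python
import math

def heat_index_f(t_db: float, rh: float) -> float:
    """Heat Index (deg F) from dry bulb temperature (deg F) and relative humidity (%)."""
    # Step 1: the "simple" Heat Index equation.
    hi = 0.5 * (t_db + 61.0 + (t_db - 68.0) * 1.2 + 0.094 * rh)
    if hi < 80.0:
        return hi
    # Step 2: the full regression equation (Rothfusz).
    hi = (-42.379 + 2.04901523 * t_db + 10.14333127 * rh
          - 0.22475541 * t_db * rh - 0.00683783 * t_db ** 2
          - 0.05481717 * rh ** 2 + 0.00122874 * t_db ** 2 * rh
          + 0.00085282 * t_db * rh ** 2 - 0.00000199 * t_db ** 2 * rh ** 2)
    # Step 3: adjustments for very dry or very humid conditions.
    if rh < 13.0 and 80.0 < t_db < 112.0:
        hi -= ((13.0 - rh) / 4.0) * math.sqrt((17.0 - abs(t_db - 95.0)) / 17.0)
    elif rh > 85.0 and 80.0 < t_db < 87.0:
        hi += ((rh - 85.0) / 10.0) * ((87.0 - t_db) / 5.0)
    return hi
```

     For example, heat_index_f(96, 50) returns approximately 108° F, consistent with NWS Heat Index tables.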
     Limitations of the Heat Index equation extend beyond complexity of computation.  For HI < 80° F or 26° C, the full equation loses validity; the simple formulation is more useful.  Its derivation via multiple regression yields an error of ± 1.3° F (0.7° C), though this accuracy is usually sufficient for weather forecasts, as the geographical variation may exceed this amount.  Exposure to direct sunlight (insolation) can increase HI values up to 15° F (8° C), though the actual amount in given conditions is indeterminate in this model.  Constants chosen for constituent variables may also limit utility of HI in real conditions, should they vary significantly from assumptions.
     The preceding presentation of HI was intended to develop some appreciation for the potential complexity and limitations of temperature indices.  In reality, practical application requires none of this.  NWS provides a simple interface to input Tdb and RH values and quickly obtain HI values.  It can be found at www.wpc.ncep.noaa.gov/html/heatindex.shtml.  Links to other information are also provided for interested readers.
 
     The Canadian Meteorological Service uses humidex to express apparent temperatures.  This “humidity index,” like HI, incorporates temperature and humidity; it is calculated as follows:
     Humidex = Tdb + 0.5555 * {6.11 * e^(5417.7530 * [1/273.16 – 1/ (Tdp + 273)]) - 10},
where Tdp is the dewpoint temperature (° C).  Alternatively,
     Humidex = Tdb + 0.5555 * (Pv – 10),
where Pv is the vapor pressure (hPa).  If vapor pressure data is available, calculation of humidex is obviously simpler; however, dewpoint temperatures are likely more readily attainable.
     Like their counterparts in the US, the Canadians save everyone the trouble of computing the index.  A humidex calculator is provided at weather.gc.ca/windchill/wind_chill_e.html.
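     For those who prefer to compute it themselves, the dewpoint formulation above reduces to a few lines of Python (an illustrative sketch; the function name is mine, not an official Meteorological Service of Canada routine):

```python
import math

def humidex(t_db: float, t_dp: float) -> float:
    """Humidex from dry bulb temperature (deg C) and dewpoint temperature (deg C)."""
    # Vapor pressure (hPa) derived from the dewpoint, per the humidex definition.
    pv = 6.11 * math.exp(5417.7530 * (1.0 / 273.16 - 1.0 / (t_dp + 273.0)))
    return t_db + 0.5555 * (pv - 10.0)
```

     A dry bulb temperature of 30° C with a 15° C dewpoint, for instance, yields a humidex near 34.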
 
Advanced Measures
     Presenting a detailed review of the numerous temperature indices and models would be contradictory to the objectives declared in Part 1.  The following discussion is limited to those with practical application to hot workplace environments.
     The most ubiquitous index is the Wet Bulb Globe Temperature (WBGT).  This index combines dry bulb (Tdb), wet bulb (Twb), and black globe (Tg) temperatures to compute an apparent temperature.  The component measurements represent ambient temperature, evaporative cooling, and radiant heat transfer, respectively.  The combination of measurements provides a better approximation of the effects of environmental conditions on the human body than is available from the meteorological indices discussed in the previous section.
     For outdoor environments with a solar load component,
     WBGTout = (0.7 * Twb) + (0.1 * Tdb) + (0.2 * Tg).
For environments with no solar load component (e.g. indoors), the calculation is reduced to
     WBGTin = (0.7 * Twb) + (0.3 * Tg).
Estimating WBGT, with adjustments for air movement and clothing, can be accomplished using the table and procedure described in Exhibit 2.
     For best results, an instrument that complies with a broadly-accepted standard, such as ISO 7243, should be used to maintain consistency and comparability of data.  The standard defines characteristics of the globe, proper bulb wetting, and other information needed to use the instrument effectively.
     WBGT has been criticized for being “overly conservative.”  That is, restrictions placed on work rates and schedules based on WBGT limits have been deemed more protective than necessary to maintain worker health, to the detriment of productivity.  Such criticisms have led some to advocate for the use of WBGT only as a screening tool.  Research, judgment, and multiple indices can be used to make this determination for specific circumstances and establish appropriate policies and procedures.
 
     Wet Globe Temperature (WGT) is comparable to WBGT.  It uses a copper globe covered with a wetted black cloth, called a Botsball, in place of the separate instruments of the WBGT apparatus.  A conversion has been derived to obtain WBGT from Botsball measurements:
     WBGT = 1.044 * WGT – 0.187° C.
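     The WBGT weighting formulas and the Botsball conversion above are simple enough to encode directly; a minimal Python sketch (function names are illustrative):

```python
def wbgt_outdoor(t_wb: float, t_db: float, t_g: float) -> float:
    """WBGT with solar load, from natural wet bulb, dry bulb, and black globe temps."""
    return 0.7 * t_wb + 0.1 * t_db + 0.2 * t_g

def wbgt_indoor(t_wb: float, t_g: float) -> float:
    """WBGT with no solar load (e.g. indoors)."""
    return 0.7 * t_wb + 0.3 * t_g

def wbgt_from_botsball(wgt_c: float) -> float:
    """Approximate WBGT (deg C) from a Wet Globe Temperature (Botsball) reading (deg C)."""
    return 1.044 * wgt_c - 0.187
```

     All three temperatures must be expressed in the same unit; the Botsball conversion, as derived, assumes degrees Celsius.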
 
     The Thermal Work Limit (TWL) has gained acceptance in some industries, such as mining, and could become common in others.  In particular, it is useful in outdoor environments with a significant contribution to heat stress attributable to radiant sources.  Advocates claim that TWL addresses the deficiencies of WBGT and is, therefore, a more-reliable indicator of heat stress.
     The TWL is the maximum metabolic rate (W/m^2) that can be sustained while maintaining a safe core temperature [< 100.4° F (38° C)] and sweat rate [< 1.2 kg/hr (42 oz/hr)].  It is determined using five environmental factors:  Tdb, Twb, Tg, wind speed (va), and atmospheric pressure (Pa).  TWL is based on assumptions that individuals are euhydrated, clothed, and acclimated to the conditions.
     A series of calculations is needed to determine TWL.  Rather than derail this discussion with a lengthy presentation of equations, readers are encouraged to familiarize themselves by using a calculator, such as that provided by Cornett’s Corner.  Preliminary calculations can be made with estimated metabolic rates; a guideline is provided in Exhibit 3.
     Body surface area (Ab) is determined by the following calculation:
     Ab (m^2) = 0.007184 * [weight (kg)]^0.425 * [height (cm)]^0.725 .
Finally, divide the metabolic rate by Ab and compare the result to TWL.  If TWL is exceeded, additional breaks in the work cycle are needed to maintain worker well-being.
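     The body surface area step and the final comparison can be sketched in Python using the Du Bois formulation (the exceeds_twl helper is a hypothetical convenience, not part of any published TWL calculator; the TWL value itself must come from the full series of calculations or a tool such as that referenced above):

```python
def body_surface_area(weight_kg: float, height_cm: float) -> float:
    """Du Bois body surface area (m^2) from weight (kg) and height (cm)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def exceeds_twl(metabolic_rate_w: float, weight_kg: float, height_cm: float,
                twl_w_per_m2: float) -> bool:
    """Hypothetical helper: True if metabolic rate per unit body area exceeds TWL."""
    return metabolic_rate_w / body_surface_area(weight_kg, height_cm) > twl_w_per_m2
```

     A 70 kg, 170 cm worker, for example, has a body surface area of roughly 1.8 m^2; a 400 W metabolic rate then corresponds to about 220 W/m^2 for comparison against the computed TWL.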
 
     Other measures of heat stress and related risk are concerned with sweating and hydration.  Skin wettedness, predicted sweat loss, and required sweating determinations are more academic than practical.  Weight loss due to sweating of less than 1.5% of body weight indicates adequate hydration, but isolating the cause of weight variation in a workplace is not straightforward.
     The specific gravity of one’s urine is a more-reliable indication of a person’s hydration, but, again, collecting this data is not feasible in most workplaces.  The practical alternative is less scientific, less precise, but simple to implement on an individual basis.  The color of a person’s urine can warn of his/her worsening hypohydration and potential for heat illness (see Part 3).  A visual guideline for evaluation is provided in Exhibit 4.
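     Where pre- and post-shift weigh-ins are feasible, the 1.5% rule of thumb reduces to a trivial check (illustrative names; the weigh-in workflow is an assumption, and, as noted above, other causes of weight variation can confound the result):

```python
def hydration_adequate(pre_shift_kg: float, post_shift_kg: float) -> bool:
    """Weight loss under 1.5% of body weight suggests adequate hydration."""
    loss_pct = 100.0 * (pre_shift_kg - post_shift_kg) / pre_shift_kg
    return loss_pct < 1.5
```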
Let’s Be Direct
     Three types of temperature or heat stress indices have been developed for various purposes.  Some are used to assess or predict comfort levels in noncritical environments, while others are used to protect workers from heat illness, extract maximum performance from an individual or battalion, or in pursuit of other consequential objectives.
     Rational indices are based on the heat balance equation (see Part 2).  These are the most accurate because they account for all mechanisms of heat transfer between the human body and its surroundings.  For the same reason, however, rational indices are also the most difficult to develop; the measurements required are infeasible outside a controlled research environment.  Their complexity and consequent lack of practicality have excluded rational indices from this presentation.
     Two fatal flaws have excluded empirical indices from this presentation:  self-reported data and subjectivity of assessments.  Self-reported data is notoriously unreliable, as imperfect memory, motivated thinking, or other influences cause distortions in the record.  Subjective assessments have low repeatability that introduces large errors in results.  These flaws render empirical indices impractical for use in workplaces where consistent policies and procedures are required.
     Thus, we must rely on direct indices to assess workplace conditions.  Direct indices are derived from measurements of environmental parameters.  Such measurements are noninvasive; they do not interrupt workflows or require participation or attention from workers.
     Heat Index (HI), humidex, Wet Bulb Globe Temperature (WBGT), and Wet Globe Temperature (WGT) are direct indices of varying complexity.  The Thermal Work Limit (TWL) requires body dimensions, but these measurements need not be repeated frequently.  Metabolic work rates can be estimated, maintaining the noninvasive nature of the index.
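As an example of how little computation a direct index may require, the standard WBGT weightings from ISO 7243 can be sketched as follows (the function names are assumptions; inputs are the natural wet bulb, globe, and dry bulb temperatures):

```python
def wbgt_outdoor(t_nwb: float, t_g: float, t_db: float) -> float:
    """WBGT with solar load (ISO 7243): 0.7*Tnwb + 0.2*Tg + 0.1*Tdb."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb: float, t_g: float) -> float:
    """WBGT without solar load: 0.7*Tnwb + 0.3*Tg."""
    return 0.7 * t_nwb + 0.3 * t_g

# All inputs in deg C; a hot, radiant-heat environment as illustration.
print(f"outdoor WBGT: {wbgt_outdoor(25.0, 40.0, 32.0):.1f} C")
print(f"indoor WBGT:  {wbgt_indoor(25.0, 40.0):.1f} C")
```

The heavy weighting of the natural wet bulb temperature reflects the dominant role of evaporative cooling in the body's heat balance.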
 
     The indices presented in this installment are only a sample of a much wider array of heat stress models available.  Investigation of others may be necessary to develop confidence in the protection a chosen index affords workers.  Any effort to better understand the landscape of heat stress risks, evaluations, and countermeasures is a worthwhile investment in worker safety.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Human Factors in Technology.  Edward Bennett, James Degan, Joseph Spiegel (eds).  McGraw-Hill Book Company, Inc., 1963.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “NIOSH Criteria for a Recommended Standard Occupational Exposure to Heat and Hot Environments.”  Brenda Jacklitsch, et al.  National Institute for Occupational Safety and Health (Publication 2016-106); February 2016.
[Link] “Thermal Indices and Thermophysiological Modeling for Heat Stress.”  George Havenith and Dusan Fiala.  Comprehensive Physiology; January 2016.
[Link] “The Heat Index Equation.”  National Weather Service Weather Prediction Center; May 12, 2022.
[Link] “What is a Heat Stress Index?”  Ross Di Corleto.  The Thermal Environment; February 22, 2014.
[Link] “Three instruments for assessment of WBGT and a comparison with WGT (Botsball).”  B. Onkaram, L. A. Stroschein, and R. F. Goldman.  American Industrial Hygiene Association Journal; June 4, 2010.
[Link] “The Assessment of Sultriness. Part I: A Temperature-Humidity Index Based on Human Physiology and Clothing Science.”  R.G. Steadman.  Journal of Applied Meteorology and Climatology; July 1979.
[Link] “The Assessment of Sultriness. Part II: Effects of Wind, Extra Radiation and Barometric Pressure on Apparent Temperature.”  R.G. Steadman.  Journal of Applied Meteorology and Climatology; July 1979.
[Link] “Globe Temperature and Its Measurement: Requirements and Limitations.”  A. Virgilio, et al.  Annals of Work Exposures and Health; June 2019.
[Link] “Heat Stress Standard ISO 7243 and its Global Application.”  Ken Parsons.  Industrial Health; April 2006.
[Link] “Heat Index.”  Wikipedia.
[Link] “Thermal Work Limit.”  Wikipedia.
[Link] “Thermal comfort and the heat stress indices.”  Yoram Epstein and Daniel S. Moran.  Industrial Health; April 2006.
[Link] “The Thermal Work Limit Is a Simple Reliable Heat Index for the Protection of Workers in Thermally Stressful Environments.”  Veronica S. Miller and Graham P. Bates.  The Annals of Occupational Hygiene; August 2007.
[Link] “The Limitations of WBGT Index for Application in Industries: A Systematic Review.”  Farideh Golbabaei, et al.  International Journal of Occupational Hygiene; December 2021.
[Link] “The Heat Index ‘Equation’ (or, more than you ever wanted to know about heat index) (Technical Attachment SR 90-23).”  Lans P. Rothfusz.  National Weather Service; July 1, 1990.
[Link] “Evaluation of Occupational Exposure Limits for Heat Stress in Outdoor Workers — United States, 2011–2016.”  Aaron W. Tustin, MD, et al.  Morbidity and Mortality Weekly Report (MMWR).  Centers for Disease Control and Prevention; July 6, 2018.
[Link] “Occupational Heat Exposure. Part 2: The measurement of heat exposure (stress and strain) in the occupational environment.”  Darren Joubert and Graham Bates.  Occupational Health Southern Africa Journal; September/October 2007.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 3:  Heat Illness and Other Heat-Related Effects]]>Wed, 14 Jun 2023 05:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-3-heat-illness-and-other-heat-related-effects     When the human body’s thermoregulatory functions are unable to maintain heat balance in a hot environment, any of several maladies may result.  Collectively known as “heat illness,” these maladies vary widely in severity.  Therefore, a generic diagnosis of heat illness may provide insufficient information to assess future risks to individuals and populations or to develop effective management plans.
     This installment of the “Thermal Work Environments” series describes the range of heat illnesses that workers may experience.  This information can be used to identify risk factors and develop preventive measures.  It also facilitates effective monitoring of conditions, recognition of symptoms, and proper treatment of heat-affected employees.
Heat-Related Illness
     The following descriptions of heat-related illnesses are presented in order of increasing severity, though individual sensitivities and proclivities render this sequence an approximation.  Also, it should not be assumed that these illnesses will always be experienced in the same way.  For some, symptoms of lower-level illness may not be present, or detectable, prior to onset of more-severe heat illness.  However, when symptoms of “minor” illness do appear, they must be given prompt attention to prevent the person’s physical condition from degrading further.

Warm Discomfort
     While not truly a heat illness itself, the initial discomfort experienced due to heat is the first warning sign of impending heat stress and that management of the thermal environment may be necessary.  Although a person’s experience does not always reflect a clear progression of effects, warm discomfort is a likely predecessor to subsequent heat illness.

Dehydration
     Although dehydration can occur in any environmental conditions, it is most-often associated with heat stress.  Elevated temperature accelerates fluid loss; a demanding work cycle can limit a person’s opportunities to rehydrate or his/her conscious awareness of the need.  These demands exacerbate physical conditions that may exist upon entering the work environment.  Specifically, beginning work with a suboptimal hydration level (i.e. hypohydration) increases a worker’s risk of severe dehydration or heat illness.

Heat Rash
     Sometimes called “prickly heat,” heat rash is a common occurrence in hot work environments; it occurs during profuse sweating.  It is caused by sweat ducts becoming blocked, forcing sweat into surrounding tissue.  It is characterized by clusters of small blisters that give the skin a bumpy, red, or pimply appearance.  It appears most often on the neck, upper chest, and anywhere that skin touches itself, such as elbow joint creases, or where excretion of sweat is otherwise restricted.
     Dismissing heat rash as an aesthetic affliction is a mistake.  It is an indication that thermoregulatory function has been inhibited to some degree and should not be ignored.  Unchecked, it could accelerate onset of more-severe heat illness.
     The most effective response to heat rash is to move to a cooler, less humid environment.  Unfortunately, this is not often a realistic option.  Therefore, the person’s overall condition should be monitored to prevent worsening illness.  The area of the rash should be kept dry; powder may be applied for comfort, but anything that warms or moistens the skin should be avoided.

Heat Cramps
     Uncontrolled contractions or spasms, usually in the legs or arms, often result from the loss of fluid or salts when sweating.  Strong, painful muscle contractions are possible, even when a person has been drinking water.  If body salts, such as sodium and potassium, are depleted without replenishment, heat cramps are often the result.
     To offset the effects of profuse sweating, electrolyte-replacement drinks (i.e. “sports drinks”) should be added to the hydration regimen.  Eating an occasional snack is an alternate method of salt replenishment that may better serve a worker’s energy requirements than liquids alone.

Heat Syncope
     Syncope is the occurrence of dizziness, lightheadedness, or fainting.  Onset is usually caused by standing for an extended period of time or suddenly rising from a seated or prone position.  Dehydration and lack of acclimation to the hot environment may be contributing factors to the occurrence of heat syncope.

Heat Exhaustion
     It may be reasonable to address the heat-related illnesses previously discussed without medical attention beyond the assistance provided by coworkers.  A case of heat exhaustion, however, warrants professional medical care to ensure proper treatment and recovery.
     Heat exhaustion is caused by extreme dehydration and loss of body salts.  It is characterized by several possible symptoms, including headache, nausea, thirst, irritability, confusion, weakness, and body temperature exceeding 100.4° F (38° C).
     First aid for heat exhaustion includes moving the person to a cooler environment and encouraging him/her to take frequent sips of cool water.  Apply cold compresses to the person’s head, neck, and face; if cold compresses are not available, rinse the same areas with cold water.  Unnecessary clothing, including shoes and socks, should be removed; this is particularly important if the person wears impermeable protective layers, such as a chemical-resistant smock, leather garment or boots, etc.  At least one person should stay with the stricken worker, continuing the actions described, until s/he is placed in the care of medical professionals.  At such time, provide all pertinent information to expedite effective treatment.

Rhabdomyolysis
     Protracted physical exertion under heat stress can cause muscles to break down, releasing electrolytes, primarily potassium, and proteins into the bloodstream.  An elevated level of potassium can cause dangerous heart rhythms and seizures; large protein molecules can cause kidney damage.
     Symptoms of rhabdomyolysis include muscle pain and cramps, swelling, weakness, reduced range of motion, and dark urine.  There is an elevated risk of misdiagnosis due to the similarity of the commonly-experienced symptoms to those of less-severe afflictions.  Tests can be performed to ensure proper diagnosis and reduce the risk of future complications.

Acute Kidney Injury
     One cause of kidney damage, as mentioned above, is the release of proteins from muscles that the kidneys are unable to process effectively.  It may also occur as a result of prolonged heavy sweating.  Low fluid and sodium levels (hypohydration and hyponatremia, respectively) impede normal renal function.  Unresolved, this can lead to kidney failure and the need for dialysis.  An effective hydration regimen is critical to kidney health.

Heat Stroke
     When the body’s thermoregulatory functions can no longer manage the heat stress to which it is subjected, heat stroke is the ultimate result.  Onset of heat stroke is typically characterized by hot, dry skin and body temperature exceeding 104° F (40° C).  The victim may also be confused or disoriented, slur speech, or lose consciousness.  Rapid, shallow breathing and seizures are also potential symptoms of heat stroke.
     Two types of heat stroke are possible:  classic and exertional.  The two are differentiated by several factors, summarized in Exhibit 1.  The key distinction is that classic heat stroke occurs during activity of much lower intensity than that inducing exertional heat stroke.  Sweating often continues during exertional heat stroke, eliminating an easily-identifiable symptom and potentially causing dangerous underestimation of the severity of a victim’s condition.
     First aid for both types of heat stroke is very similar to that for heat exhaustion, though more aggressive.  Additional cold compresses should be applied, particularly to the armpits and groin.  More thorough soaking with cold water, or in an ice bath, with increased air movement, should be provided to the extent possible.  Emergency medical care is a necessity for every heat stroke victim.

Death
     Undiagnosed or untreated heat illness can escalate rapidly.  Ignoring early warning signs places all workers in a hot environment at greater risk of heat stroke or other serious injury.  With a mortality rate of ~80%, heat stroke victims require immediate attention to have any hope of recovery; a body temperature exceeding 110° F (43.3° C) is almost always fatal. 
     Hot environments pose a greater risk to life than do cold environments.  There are three key reasons for this:
  1. Normal body temperature (98.6° F/37° C on average) is much closer to the safe upper limit (~104° F/40° C) than to the lower limit (~77° F/25° C).
  2. All external sources of heat load are in addition to metabolic heat, which is continually generated.
  3. “Excessive motivation,” whether positive (e.g. intrinsic desire to perform) or negative (e.g. avoidance of punitive action), can cause a person to ignore symptoms and warning signs of developing heat illness.
     Survivors of heat stroke often suffer from damage to vital organs, such as heart, kidneys, and brain.  Injuries associated with heat stroke typically require life-long vigilance in medical care and may be a victim’s ultimate cause of death.

Other Heat-Related Effects
     There are risks associated with hot work environments that are not adequately described in the “traditional” sense of heat illness.  A workplace with a radiant heat source (other than the sun) places workers at risk of burns.  The source of radiant heat is a hot object, often a furnace, forge, or other process equipment.  Workers may be required to be in close proximity to such equipment to operate or interact with it, such as when loading or unloading material.  A small misstep could cause a person to come in contact with the equipment or heated material.  Even with protective gear in proper use, direct contact could result in a severe burn.
     Thus far, the afflictions discussed have been physical in nature.  However, there are also heat-related cognitive effects to consider.  Tests conducted on subjects under heat stress have demonstrated the potential for significant cognitive impairment during extended exposure.
     Test subjects experienced reductions in working memory and information-processing capability.  Other results showed that stimulus-response times and error rates increased, with a consequent increase in total task time.  Performance of complex tasks was affected to a greater degree than simple tasks, suggesting that subjects’ ability to concentrate had been negatively impacted by prolonged heat stress.
     The potential effect on productivity and quality of impaired task performance is easy to infer.  Less obvious, perhaps, is the increased risk of injury that results from reduced information-processing capability and increased reaction time.  Any lag in recognizing a dangerous condition, formulating an appropriate response, and executing it significantly increases risk to personnel and property.
 
     Discussion of each of the heat-related illnesses and other effects has implicitly referenced the time during a work shift.  The time between work shifts is also critically important to the well-being of workers returning to a hot environment day after day.
     The duration of the gap between shifts and the activities in which a person engages during that time determine his/her condition at the beginning of the next shift.  For optimum health and performance in subsequent shifts, workers should consider the following recovery plan elements:
  • Spend the “downtime” in a cool, dry (i.e. air-conditioned) environment.
  • Replenish fluids (“euhydrate”) and salts, with attention paid to achieving a proper balance.  Avoid alcoholic and heavily-caffeinated beverages.
  • Reduce physical activity, allowing the heart, muscles, and brain to recover.
  • Get plenty of rest; the human body “rebuilds” while sleeping.
Risk Factors
     Workers in hot environments are exposed to a number of risk factors for heat-related illness.  The diagram in Exhibit 2 names a baker’s dozen of them; several have already been discussed in this series.  For example, the presentation of the heat balance equation in Part 2 included discussion of several of these factors, including temperature and humidity, radiant heat sources, physical exertion, and medications.  Others are discussed further in subsequent installments of the “Thermal Work Environments” series.
     The effects of heat stress range from mild to severe, even fatal.  Protection from heat illness begins with a cohesive team, whose members look after one another and respond appropriately to the earliest signs of onset.  To be effective guardians of their own health and that of their teammates, workers must possess an understanding of heat illness, common symptoms, and first aid treatments.  Tolerance to heat varies widely among individuals; the ability to recognize changes in a person’s condition or behavior, in the absence of sophisticated monitoring systems, is paramount.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Human Factors in Technology.  Edward Bennett, James Degan, Joseph Spiegel (eds).  McGraw-Hill Book Company, Inc., 1963.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “NIOSH Criteria for a Recommended Standard Occupational Exposure to Heat and Hot Environments.”  Brenda Jacklitsch, et al.  National Institute for Occupational Safety and Health (Publication 2016-106); February 2016.
[Link] “Occupational Heat Exposure. Part 1: The physiological consequences of heat exposure in the occupational environment.”  Darren Joubert and Graham Bates.  Occupational Health Southern Africa Journal; September/October 2007.
[Link] “Workers' health and productivity under occupational heat strain: a systematic review and meta-analysis.”  Andreas D. Flouris, et al.  The Lancet Planetary Health; December 2018.
[Link] “Evaluating Effects of Heat Stress on Cognitive Function among Workers in a Hot Industry.”  Adel Mazloumi, Farideh Golbabaei, et al.  Health Promotion Perspectives; December 2014.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 2:  Thermoregulation in Hot Environments]]>Wed, 31 May 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-2-thermoregulation-in-hot-environments     The human body reacts to exposure to – and generation of – heat by activating various system responses.  The nervous, cardiovascular, respiratory, and exocrine systems are key players in the physiological behavior of workers subject to heat stress.  Effective thermoregulation requires that these systems operate in highly-interconnected ways.
     This installment of the “Thermal Work Environments” series provides an overview of the human body’s thermoregulatory functions that are activated by heat stress and introduces the heat balance equation.  Each component of the heat balance equation is described in terms of physiological and environmental factors that impact thermoregulation.
Thermoregulatory Function
     Core body temperature is regulated by the hypothalamus, located at the base of the brain (see Exhibit 1); its functions are divided between two areas.  The anterior hypothalamus manages heat-dissipative functions, such as vasodilation and sweat production.
     Vasodilation results in increased blood flow to the outer regions of the body, transferring heat from the core to the skin.  A corresponding rise in heart rate increases the rate of heat transfer from the core to extremities.
     Rising skin temperature prompts sweat production.  Evaporation of sweat from the skin is the largest contributor to heat loss from the body; improving its efficiency is a common goal in hot environments.  It is also the reason that proper hydration is critical to maintaining well-being in a hot environment.
     Respiration also contributes to heat loss, as inhaled air is warmed by the body before being expelled.  This holds until the ambient temperature reaches or exceeds that of the body, at which point respiration begins to increase heat stress.  Respiration plays a lesser role in humans than in other animals.  Dogs, for example, pant to increase respiratory heat loss; it is a larger contributor for them, relative to other mechanisms, than for humans.
     These are the primary control mechanisms that act in concert to regulate core body temperature.  These controls are activated automatically, often without our conscious awareness.  Other responses to heat stress require active engagement, such as monitoring physical and environmental conditions, adjusting clothing and equipment, and developing work-rest cycles and contingency plans.  These factors are relevant to the pursuit of heat balance.
 
Heat Balance
     Homeothermy requires a balance between the heat generated or absorbed by the body and that which is dissipated from the body.  In “perfect” equilibrium, the net heat gain is zero.  Zero heat gain implies that the body’s thermoregulatory response functions (i.e. heat strain) are sufficient to maintain a constant core temperature in the presence of heat stress.
     As mentioned in Part 1, heat stress and heat strain are quantifiable, typically presented in the form of a heat balance equation.  The form of heat balance equation used here is
     S = M + W + C + R + K + E + Resp,
where S is heat storage rate, M is metabolic rate, W is work rate, C is convective heat transfer (convection), R is radiative heat transfer (radiation), K is conductive heat transfer (conduction), E is evaporative cooling (evaporation), and Resp is heat transfer due to respiration.  Each value is positive (+) when the body gains thermal energy (“heat gain”) and negative (-) when thermal energy is dissipated to the surrounding environment (“heat loss”).  Each term can be expressed in any unit of energy or, if time is accounted for, power, but consistency must be maintained.  The following discussion provides some detail on each component of the heat balance equation in the context of hot environments.
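A minimal numeric sketch of this sign convention follows; the values are illustrative, not measurements.

```python
# Each term is signed: positive = heat gain, negative = heat loss.
def heat_storage(M, W, C, R, K, E, Resp):
    """S = M + W + C + R + K + E + Resp, all terms in watts here."""
    return M + W + C + R + K + E + Resp

S = heat_storage(M=300.0,     # metabolic heat, always positive
                 W=-25.0,     # external work, typically < 10% of M
                 C=-60.0,     # convective loss to cooler air
                 R=-20.0,     # net radiative loss to surroundings
                 K=0.0,       # no conductive contact
                 E=-180.0,    # evaporative cooling, always negative
                 Resp=-15.0)  # respiratory loss
print(f"S = {S:+.0f} W")      # S = 0: heat strain matches heat stress
```

Note that the terms here sum to zero only because the example was constructed that way; in practice, S fluctuates as conditions and thermoregulatory responses change.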
 
     The “perfect” equilibrium mentioned above and, thus, constant core temperature is achieved when S = 0.  This situation is more hypothetical than realistic, however.  Fluctuations of body temperature occur naturally and, within limits, are no cause for concern.  For example, a person’s core temperature varies according to his/her circadian or diurnal rhythm.  Despite a range of up to 3°F (1.7°C), these fluctuations go largely unnoticed.
     The average “normal” temperature, 98.6°F (37.0°C), is cited frequently.  Less common, however, is discussion of a range of “safe” temperatures.  Most people maintain normal physiological function in the 97 – 102°F (36.1 – 38.9°C) range of core temperature.  It is also worth noting that these values refer to oral temperature; rectal temperatures are usually ~ 1°F (0.6°C) higher.  While rectal temperature is a more accurate measure of core temperature, the limitations on its use in most settings should be obvious.
     Because the human body is sufficiently resilient to accommodate significant temperature fluctuations, S = 0 can be treated as a target average.  Heat storage in the body (S) will vary as the body’s thermoregulatory control “decisions” are executed.  Heat storage can become dangerous when S > 0 for an extended period, trends upward, or becomes exceptionally high.
 
     The metabolic rate (M) is the rate at which the body generates heat, corresponding to work demands and oxygen consumption.  Precise measurements are typically limited to research settings; workplace assessments of heat stress typically use estimates or “representative values.”  Exhibit 2 provides a guide for selecting a representative metabolic rate for various scenarios.
     M is always positive (M > 0), representing thermal energy that must be dissipated in order to maintain a constant core temperature.  It may also be called the “heat of metabolism;” heat is generated by chemical reactions in the body, even in the absence of physical work.
     A number of factors can affect a person’s metabolic rate.  Several are presented, in brief, below:
  • The value of M when a person is at rest under normal conditions is called the basal metabolic rate (BMR).  It is the minimum rate of metabolic heat generation, primarily influenced by thyroid activity, upon which other influences build.
  • The size of a person’s body influences his/her BMR; “larger individuals have greater energy exchanges than smaller persons.” (Bennett, et al)  Though the effect on BMR has not been found to be proportional to any single measure of body size, it varies to the 2/3 or 3/4 power of a person’s weight.
     Related research suggests that only fat-free tissues of the body contribute to basal heat production.  This finding may encourage the use of body mass index (BMI) calculations, though they are notoriously unreliable.  More accurate methods of determining body fat content exist, but they are more difficult to execute.  Their use, therefore, is typically limited to in-depth research scenarios.
  • A person experiencing a period of growth requires additional energy.  Therefore, a young person has a higher BMR than a comparably-sized adult.
  • A person’s diet influences heat production through the specific dynamic action (SDA) of the food ingested.  SDA is the heat generated by digestion in excess of the energy value of the food.  Protein produces the highest SDA, while fat and carbohydrates have less impact.
  • The use of drugs – prescription or otherwise – may influence a person’s metabolism.  Some effects may be desired, improving a condition being treated, while others may be unintended and detrimental.
  • Physical activity, or “muscular exercise,” changes the energy requirements of the body; some energy is, inevitably, converted to heat.
  • A person’s core temperature also influences his/her metabolic rate; M increases ~7% per degree Fahrenheit (0.6°C) rise in core temperature.  This cyclical influence on the heat balance can contribute to a “runaway” thermal condition if not properly managed.
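The compounding effect of the last factor can be illustrated with a short calculation.  The ~7% figure is taken from the text above; the baseline rate of 300 W and the compounded form of the scaling are assumptions for illustration only.

```python
def metabolic_rate_at(core_temp_f: float, m_normal_w: float,
                      normal_temp_f: float = 98.6,
                      rise_per_deg_f: float = 0.07) -> float:
    """Scale metabolic rate ~7% per deg F of core temperature rise
    (compounded); an illustrative sketch, not a clinical model."""
    return m_normal_w * (1.0 + rise_per_deg_f) ** (core_temp_f - normal_temp_f)

# Metabolic heat load at normal, elevated, and heat-exhaustion-range
# core temperatures, assuming a 300 W baseline.
for t_f in (98.6, 100.4, 102.0):
    print(f"{t_f:5.1f} F -> {metabolic_rate_at(t_f, 300.0):.0f} W")
```

Even a fever-range rise in core temperature adds tens of watts to the heat load that must be dissipated, which is the feedback behind the “runaway” condition noted above.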

     The work rate (W) represents the portion of energy consumed in the performance of work that is not converted to heat.  Many formulations of the heat balance equation exclude this negative (heat-reducing) term, deeming it safe to ignore, as it is usually less than 10% of M.
 
     Heat dissipation via convection (C) begins with the circulatory system.  Heat from the body’s core and muscles is transferred to the skin, preventing “hot spots” that could damage organs or other tissue.  From the skin, heat is transferred to the surrounding air (C is negative), assuming the ambient temperature is lower than the skin temperature.  If the reverse is true, C becomes positive, adding to the body’s heat load.
 
     The radiation (R) term refers, specifically, to infrared radiation exchanged between the body and nearby solid objects.  The skin acts as a nearly-perfect black body; that is, it efficiently absorbs (positive R) and emits (negative R) infrared radiation.  Like convection, the sign (direction) of radiative heat transfer depends on the temperature of the skin relative to that of the surroundings.  A person’s complexion has no effect on infrared radiation or radiative heat transfer.
 
     Heat transfer by conduction (K) is not common in workplaces, as it requires direct contact with a solid object.  Where it does exist, it is often highly localized and transient, such as in the hands during manual manipulation of an object.  It is positive when touching a hot object and negative when touching a cold one.  Contact with objects made through clothing is considered “direct contact” for purposes of heat stress assessment.
 
     In this formulation of the heat balance equation, the evaporation (E) term captures the cooling effect of sweat evaporating from the skin.  In mild conditions, the amount of sweat produced may be imperceptible, but it is not insignificant.  This “insensible water loss” can approach 1 qt (0.9 L) per day, dissipating ~25% of basal heat production.  During strenuous physical activity, the body can produce more than 3.2 qt (3 L) of sweat in one hour.
     This component is often called evaporative cooling; as this term implies, E is always negative.  Several physical and environmental conditions place limitations on the capacity of evaporative cooling.  Proper hydration is necessary to sustain the high sweat rates that produce maximum cooling.  Clothing and protective gear may limit the interface area available or the efficiency of evaporation.
     Ambient conditions significantly impact the body’s ability to cool itself via evaporation.  As humidity increases, the rate of evaporative cooling decreases.  Increasing air speed enhances evaporation, though no additional benefit is gained at speeds above ~6.7 mph (3 m/s) or air temperature above 104°F (40°C).  When air temperature exceeds skin temperature, low humidity is needed for evaporation to compensate for convective heat gain to maintain a net heat loss.  In favorable conditions, evaporative cooling is the single greatest contributor to heat loss from the body.
 
     Heat loss due to respiration (Resp) may be difficult to quantify.  In many formulations of the heat balance equation, it is included in the evaporation (E) term, as the largest contribution comes from expelling water vapor.  There is also heating of the air while in the lungs, though it may be a relatively small heat transfer.
     Resp is usually negative, but could become positive in very high air temperatures.  Such conditions are not common in workplaces, as this type of environment is often deemed unsafe for various reasons.  In most cases, access to such an environment is restricted to individuals with protective gear, such as breathing apparatus, limited in duration, and closely monitored.
     Though the respiration component may be difficult to quantify independent of other heat loss mechanisms, including the Resp term in this discussion serves a practical purpose.  Managing heat and fluid loss in a hot environment is aided by simply recognizing that respiration makes a contribution, even if its magnitude is unknown.  The direction of heat transfer is usually understood intuitively; this may be the only information workers need to take additional precautions to ensure their well-being.
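     Pulling these terms together, a common signed-sum form of the heat balance equation is sketched below; groupings and symbols vary by source, and this reconstruction simply collects the terms as used in this series:

```latex
S = M - W + C + R + K + E + Resp
```

Here S is the rate of heat storage in the body, M is metabolic heat production, W is external work, and C, R, and K are convective, radiative, and conductive exchanges.  Each exchange term carries its own sign; as noted above, E is always negative and Resp is usually negative.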
 
     The body’s heat balance is pictorially represented as a mechanical balance scale in Exhibit 3.  It presents factors that increase and decrease core temperature as well as the “normal” range of variation throughout the day.  A visual reference can be a useful tool, as it is more intuitive than a written equation, promoting deeper understanding that aids practical application of information.

     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Human Factors in Technology.  Edward Bennett, James Degan, Joseph Spiegel (eds).  McGraw-Hill Book Company, Inc., 1963.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “Hypothalamus” in Encyclopedia of Neuroscience.  Qian Gao and Tamas Horvath.  Springer, Berlin, Heidelberg; 2009.
[Link] “Thermal Indices and Thermophysiological Modeling for Heat Stress.”  George Havenith and Dusan Fiala.  Comprehensive Physiology; January 2016.
[Link] “Occupational Heat Exposure. Part 1: The physiological consequences of heat exposure in the occupational environment.”  Darren Joubert and Graham Bates.  Occupational Health Southern Africa Journal; September/October 2007.
[Link] “NIOSH Criteria for a Recommended Standard Occupational Exposure to Heat and Hot Environments.”  Brenda Jacklitsch, et al.  National Institute for Occupational Safety and Health (Publication 2016-106); February 2016.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 1:  An Introduction to Biometeorology and Job Design]]>Wed, 17 May 2023 05:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-1-an-introduction-to-biometeorology-and-job-design     In the minds of many readers, the term “thermal environment” may induce images of a desert, the Arctic, or other thoughts of extreme conditions.  While extreme conditions require intense planning and preparation, they merely bookend the range of work conditions that require consideration.  That is to say that the environmental conditions of all workplaces should be thoroughly assessed and the impacts on the people within them properly addressed.
     The ensuing discussion is generalized to be applicable to a wide range of activities.  The information presented in this series is intended to be universally applicable in manufacturing and service industries.  Additional guidance may be available from other sources; readers should consult industry- or activity-specific organizations for detailed information on best practices and regulations that are beyond the scope of this series.
Terms in Use
     Most of the terms used in this series are in common use, though their application to the subject at hand may be unfamiliar to some readers.  The usage of some of these terms is presented here to facilitate comprehension of information throughout the series.
     It seems logical to begin with the title of the series:  “Thermal Work Environments.”  This term was chosen to limit the scope of discussion to environmental conditions found in workplaces, differentiating them from military operations, athletics, and leisure activities.  The information provided remains valid in these contexts, but the objectives and decision-making, not to mention the clothing requirements, differ sufficiently to warrant explicit exclusion from the discussion of work environments.  Environmental considerations in these settings will, however, be addressed briefly as a related topic.
     “Thermal,” as used here, has multiple connotations.  For one, it refers to the homeothermic nature of human beings.  The human body attempts to maintain a constant core temperature irrespective of its surroundings; homeo ≈ same, therm ≈ temperature.  With regard to surroundings, it is an umbrella term that encompasses several variables that influence a person’s perception of temperature and assessment of comfort.  These include the actual (air) temperature, humidity, air movement (e.g. wind), sunlight, and other sources of radiation.
     Comfort, as referenced above, is the subjective, individual perception of conditions.  Only thermal comfort will be considered here.  There are important distinctions between comfort, stress, and strain.  Stress and strain can be related to either high or low temperatures:
  • stress – the net effect (i.e. heat load) of metabolic heat generation, clothing, and environmental conditions to which an individual is subject.
  • strain – the physiological response to stress; i.e. changes in the body, made automatically, to retain (cold strain) or dissipate (heat strain) thermal energy.
Stress and strain are quantifiable phenomena, whereas comfort is a qualitative judgment of thermal stress or its absence.
     Even “heat” and “cold” warrant explicit mention, as their use blurs vernacular and technical meaning.  Technically speaking, heat is thermal energy; cold has no technical definition.  An attempt at rigid adherence to technical terminology in this discussion would be futile and counterproductive.  Conventional (i.e. vernacular) use of the terms suffices:
  • heat – high or excess thermal energy; cooling desired.
  • cold – low or insufficient thermal energy; warming desired.
A schematic representation of the continuum of thermal stress that spans these terms is provided in Exhibit 1.
     Coming full circle, we return to the title of this installment.  All of the terms discussed thus far are used in reference to biometeorology – the study of the effects of atmospheric conditions, such as temperature and humidity, be they naturally-occurring or artificially generated, on living organisms.  Our interest, of course, is in human biometeorology and how it influences job design.
     Job design defines a variety of elements of a person’s workplace experience.  These may include the physical layout of a workstation or entire facility, equipment used, policies and procedures to be followed, and the schedule according to which tasks are performed.  All aspects of how and when work is performed are part of job design.
     Understanding how the terms presented here are used is necessary to comprehend this series as a whole.  Other terms are introduced throughout the series in the context of relevant discussions.
 
Structure of the Series
     The “Thermal Work Environments” series is presented in several parts, in three loosely-defined “sections.”  The first section discusses hot environments, including the physiological effects on people working in elevated temperatures.  Measurements and calculations used to define and compare environmental conditions – specifically, the heat stress caused – are also presented.  Finally, recommendations are provided to assist those designing and performing tasks in minimizing the detrimental effects of heat stress.
     The second section discusses cold environments.  The presentation of information mirrors that of hot environments in the first section.  The final section comprises discussions of related topics that, while useful, could not be included seamlessly in the first two sections.
     The series structure described was chosen with the following objectives in mind:
  • Limit the length and scope of each installment so that they are easy to consume and comprehend.
  • Simplify future references to relevant material by making it easy to locate within the series’ installments.
  • Promote a holistic approach to job design in environments subject to seasonal (or similar) variations by facilitating side-by-side comparison of considerations relevant to hot and cold environments.
  • Simplify expansion or modification of the series to maintain its utility as knowledge and practices evolve.
     There is a series directory at the end of this post.  Links will be added as new installments are published; returning to this post provides quick and easy access to the entire series.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] “A glossary for biometeorology.”  Simon N. Gosling, et al.  International Journal of Biometeorology; 2013.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
 
Directory of “Thermal Work Environments” entries on “The Third Degree.”
Part 1:  An Introduction to Biometeorology and Job Design (17May2023)
Part 2:  Thermoregulation in Hot Environments (31May2023)
Part 3:  Heat Illness and Other Heat-Related Effects (14Jun2023)
Part 4:  A Measure of Comfort in Hot Environments (28Jun2023)
Part 5:  Managing Conditions in Hot Environments (12Jul2023)
Part 6:  Thermoregulation in Cold Environments (18Oct2023)
Part 7:  Cold Injury and Other Cold-Related Effects (1Nov2023)
Part 8:  A Measure of Comfort in Cold Environments (15Nov2023)
Part 9:  Managing Conditions in Cold Environments (29Nov2023)
]]>
<![CDATA[Toxicity]]>Wed, 03 May 2023 05:00:00 GMThttp://jaywinksolutions.com/thethirddegree/toxicity     A toxic culture can precipitate a wide range of deleterious effects on an organization and individual members.  The toxicity of an organization becomes obvious when overt behaviors demonstrate blatant disregard for social and professional norms.  These organizations often become fodder for nightly “news” broadcasts as they are subject to boycotts, civil litigation, and criminal prosecution.
     An organization’s toxicity can also manifest much less explicitly.  Subtle behaviors and surreptitious actions are more difficult to detect, and intent is more difficult to prove.  It is this uncertainty that allows toxic cultures to persist, to refine and more effectively disguise maladaptive behaviors.
     To combat organizational toxicity, leaders must appreciate the importance of a healthy culture, recognize the ingredients of toxic culture, and understand how to implement effective countermeasures.
What It Is and Why It Matters
     “Culture,” in general, and “corporate culture,” specifically, can be defined in myriad ways.  For simplicity and convenience, we will rely on our constant companion, dictionary.com, for ours:  “the values, typical practices, and goals of a business or other organization, especially a large corporation” (def. 7).  Each of the components of this definition – values, practices, and goals – contribute extensively to the culture created within an organization.
     An organization’s values are the ideals pursued, as matters of course, during normal operations or, perhaps more accurately, those espoused by the organization’s leadership.  These often include things like diversity, community involvement, environmental protection, innovation, personal development, cultural sensitivity, and a host of other genuine interests and platitudes.
     An organization’s goals should be derived directly from its values.  Financial goals are obvious requisites, but environmental, personnel development, and community project goals may also be established.  Some goals may be publicly announced, while others are only discussed internally.
     The practices an organization engages in – or tolerates – demonstrate the extent to which its values are honored while pursuing its goals.  Practices that are not aligned with stated values, whether organizational or individual in nature, are sources of toxicity.
     “Toxicity” is a generalized term used to describe any aberrant behavior, environmental condition, or negative affect that undermines team cohesiveness, effective decision-making, individual performance, or well-being.  A “toxic workplace culture” is one in which toxicity is encountered with regularity by one or more individuals or groups.  An important and far too-prevalent example of toxic culture is discussed in “Managerial Schizophrenia and Workplace Cancel Culture” (9Mar2022).
 
     The individual members of an organization are like the cells of a living organism.  Poor functioning or loss of one cell may be easily overcome; however, as the number of poisoned cells increases, functioning of the entire organism degrades.  Likewise, as toxic culture spreads within an organization, its success and survival are jeopardized.
     The deleterious effects of toxic culture compound.  Toxicity progresses from individuals to those around them, then to larger and larger groups.  Without effective intervention, the spread of toxic culture and its consequences to the entire organization is inevitable.  It may be tempting to call it a domino effect, but it is much more complex than that; the spread of toxic culture is not as linear or predictable as falling dominoes.
     This progression is now discussed in brief; a thorough exploration of all possible paths and consequences of toxic culture development is beyond the scope of this presentation.  It should suffice, however, to convince readers that workplace culture is worthy of rigorous scrutiny and course correction.
 
     The first recognizable symptom of a toxic workplace culture is often an individual’s increasing stress level.  This does not include the stress induced by a challenging project or looming deadline (assuming these were appropriately assigned); these are often considered forms of “good stress” that motivate and inspire people to do their best work.  Instead, this refers to “bad stress” – that which is unnecessary and undeserved.  Stress and dissatisfaction tend to increase, causing additional problems for the individual, such as self-doubt, burnout, and other mental health concerns.  Left unchecked, stress can also lead to physical illness as serious as heart disease or other chronic disorders.
     The effects on an individual impact coworkers in two key ways.  First, relationships may be strained, as individuals’ responses to elevated stress are often unhealthy interpersonally.  Second, the coworkers’ workload often increases as a result of the individual’s reduced productivity and increasing absenteeism.  Any project team, department, or committee of which the individual is a member is, thus, less effective.  This can create a spiral where one team member “drops out,” raising the stress levels of others, who eventually succumb to its ill effects.
     Weakening financial performance is a common downstream effect of toxicity.  However, the influence of an organization’s culture on its financial performance is often recognized only post mortem; that is, after a business has collapsed or is in crisis.  In most cases, there are plenty of signs – big, flashing, neon signs – that are simply ignored until irreparable damage has been done.
     Reduced productivity, engagement, and innovation are clear signals that trouble is brewing.  Rising healthcare costs, absenteeism, and attrition also provide reliable warnings.  Difficulty recruiting new employees can also be a sign that those outside the organization recognize a problem, even if those inside it are in denial.
     As “good” people depart, those left behind are stressed by an increasing concentration of toxicity, accelerating the organization’s demise.  This can be brought about through financial collapse or accelerated by noncompliance and corruption.  Once civil litigation and criminal prosecution of officers begins, survival of the organization is uncertain at best.
     An organization’s culture generates various cycles of behavior.  These can be virtuous cycles that reinforce positive behaviors and support long-term goals or vicious cycles that drive away ethical, high-performing team players.  Every behavior is endorsed, either explicitly or implicitly, or interrupted; the choice is made by leaders throughout an organization during every cycle.  Defeating vicious cycles requires consistent interruption with demonstrations of proper behavior that begin new virtuous cycles.
 
Characteristics of Toxic Culture
     Researchers at CultureX have identified five characteristics of “corporate” culture that push an organization beyond annoying or frustrating to truly toxic.  The “Toxic Five” are:  disrespectful, noninclusive, unethical, cutthroat, and abusive.
     A disrespectful environment exerts a strong negative influence on employee ratings of their workplace.  A somewhat generic term, disrespect includes any type of persistent incivility and may overlap other characteristics of toxic culture.  Being dismissive of one’s ideas or inputs without proper consideration is a common form of disrespect experienced in toxic cultures.
     Noninclusive workplaces are those in which employees are differentially valued according to traits unrelated to any measure of merit.  Demographic factors relevant to noninclusive cultures include race, gender, sexual orientation, age, and disability.  Any type of discrimination or harassment based on these traits is evidence of a noninclusive culture.
     Pervasive cronyism, where “connections” afford special privileges, is also indicative of a noninclusive culture.  “General noninclusive culture” refers to sociological in-group and out-group behavior; at its extreme, one or more colleagues may be ostracized by a larger or more-entrenched group.  Cliques are not just for high school anymore!
     Unethical behavior can also take many forms; it may be directed at peers, subordinates, superiors, customers, suppliers, or any stakeholder that could be named.  It could involve the use or disclosure of employees’ personal information, falsifying regulatory, financial, or other documentation, misleading or intentionally misdirecting subordinates or managers, or myriad other inappropriate actions or omissions.  Ethics is a broad topic, a proper exploration of which is beyond the scope of this presentation.
     A cutthroat environment is one in which coworkers actively compete amongst themselves.  This type of culture discourages cooperation and collaboration; instead, employees are incentivized to undermine one another.  In extreme cases, sabotage, by physical or reputational means, may be committed to maintain a favorable position relative to a coworker.  Workplace Cancel Culture thrives in cutthroat environments.
     An abusive culture refers, specifically, to the behavior of supervisors and managers.  Supervisors may be physically or verbally aggressive, but abusive behavior is often more subtle.  Publicly shaming an employee for a mistake, absence, or other “offense,” as well as individually or collectively disparaging team members are clear signs of abusive management that are often ignored.  Abusive behavior must be differentiated from appropriate reprimands, respectfully and professionally delivered, and other disciplinary actions required of effective management.
     The Toxic Five provide a framework for understanding the conditions in which toxic cultures develop and persist.  What is now needed is an effective method of culture-building that prevents toxicity from spreading and provides an antidote for isolated cases that develop.
 
Models of Culture Development
     Various cultural frameworks have been developed; some by prominent academics or intellectuals, others by famous managers, and still others in relative obscurity.  The best model for any organization may be a hybrid of existing frameworks or a new approach that exploits its unique character.  A small sample of existing models is presented here for inspiration.

The Three-Legged Stool.  A rather simple model, the three-legged stool approach suggests that cultural development relies on resources, training, and accountability.  Each leg is fundamental to a healthy culture and easy to understand.
     Without the required resources, employees are unable to perform as expected, causing stress and dissatisfaction.  This begins the progression of deleterious effects discussed previously.  Unwillingness to provide necessary resources indicates an environment that is disrespectful to team members and may also be related to unethical or abusive behavior.
     Training provides the know-how that employees need to succeed.  It should consist of more than the technical aspects of a job, including appropriate responses to exposure to toxicity.  Team members must understand the behaviors required of virtuous cycles to effectively interrupt and replace vicious cycles.
     Every member of an organization must be held accountable for his/her actions and influence on workplace culture.  All individual contributors, managers, and executives must be held to the same standard for a healthy culture to endure.

Four Enabling Conditions.  The four enabling conditions were originally published as keys to effective teamwork.  Teamwork and culture are so intricately interwoven that considering these conditions as enablers of healthy culture is also valid.  They are:  compelling direction, strong structure, supportive context, and shared mindset.
     To be effective, an organization requires a “compelling direction.”  This can usually be discerned from stated values and goals that define where the organization intends to go and what paths to that destination are and are not acceptable.
     A “strong structure” enables the highest performance of which an organization is capable.  In this context, structure refers to membership in a group and the apportionment of responsibility within it.  Diverse backgrounds and competencies create a versatile team.  Careful consideration of workflows and assignment of responsibility supports efficient achievement of objectives.  A versatile, efficient organization can be said to have a strong structure.
     The “supportive context” needed to maintain a healthy culture is closely related to the resources of the Three-Legged Stool model.  It also incorporates an incentive structure that encourages cooperation and collaborative pursuit of objectives.
     It is particularly important – and difficult – to establish a “shared mindset” within a geographically dispersed organization.  Consistently “lived” values are critical to maintaining a shared mindset; all members must receive the same messages, treatment, and resources for it to survive.
     The four enabling conditions are prerequisites to a healthy culture, but they do not guarantee it.  One must never forget that teams are comprised of humans, with all the idiosyncrasies and perplexities that render team dynamics as much art as science.  For this reason, maintaining a healthy culture requires vigilance and dedication.

Three Critical Drivers.  In addition to the Toxic Five, the team at CultureX has identified the three “most powerful predictors of toxic behavior in the workplace.”  Slight modification of the terminology and viewpoint yields the “critical drivers” of culture:  leadership, social norms, and work design.  Implicit in the term is that these drivers can lead to a healthy culture, if well executed, or toxicity, if poorly executed.
     It is likely no surprise that leadership is consistently found to be the strongest driver of culture, be it in a positive or negative direction.  Leaders set expectations, whether consciously or inadvertently; team members mirror leaders’ behaviors, as they are understood to be “the standard.”
     Leaders throughout an organization provide examples of behavior for those around them.  In dispersed groups, this can lead to the development of “microcultures” that differ from other locations or the “corporate standard.”  The existence of a microculture can be beneficial, neutral, or unfavorable.  A toxic microculture is sometimes called a “pocket of toxicity.”  Once discovered, a pocket of toxicity must be contained and corrected to protect the entire organization.
     Behaviors are deemed acceptable when they are aligned with an organization’s social norms.  Norms are context-sensitive; what is appropriate in one setting may be unacceptable in another.  A leader’s behavior often establishes social norms, but a cohesive team can define its own that negate some toxicity that would otherwise infiltrate the group.
     Elements of work design can be modified to reduce employees’ stress and increase productivity and satisfaction.  Eliminating “nuisance work” from a person’s responsibilities is a clear winner, but is not as straightforward as it might first appear.  A job cannot always be customized to an individual; it must meet the needs of the organization regardless of who is performing it.  One person might be tortured by “paperwork,” while another is annoyed by the need to keep physical assets organized.  If every task that could be distasteful to any employee were removed, no work would get done!
     Instead, focus on eliminating “busy work” or nonvalue-added activities.  Employees are more likely to remain engaged when performing tasks they do not enjoy if they understand the value of the work.  Allowing flexibility in task performance or incorporating their input in the work design also increases engagement and satisfaction.
     While flexibility is desirable in task performance, clarity and consistency are necessary when it comes to roles and responsibilities.  Obviously, an individual needs to know the requirements of his/her own job, but understanding the roles of others is also important.  If support is needed, or a problem is discovered, each team member must know to whom it should be reported.  Ambiguous reporting structures, with intersecting hierarchies, “dotted-line” reporting relationships, and multiple “bosses” make it difficult for anyone to be confident in the correct course of action that will both achieve the desired outcome and satisfy reporting expectations.
 
     There is significant overlap in the models presented, though attention was drawn to little of it.  The remainder is left to the reader to recognize and implement in the fashion that best suits his/her circumstances.
 
Final Thoughts
     The preceding discussion focused on the spread of toxicity within an organization.  It is worth noting, however, that the deleterious effects of toxic culture, in many cases, are not confined to a single organization.  Toxic behaviors are often reciprocated or contagious, allowing the spread of toxicity to an organization’s supply chain, customers, and local or global community.  Every stakeholder is susceptible to the effects of toxicity that is allowed to permeate an organization.  Leaders’ diligence in maintaining a healthy culture protects every member of the organization and those with whom they interact.
 

     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] “Why Every Leader Needs to Worry About Toxic Culture.”  Donald Sull, Charles Sull, William Cipolli, and Caio Brighenti.  MIT Sloan Management Review; March 16, 2022.
[Link] “How to Fix a Toxic Culture.”  Donald Sull and Charles Sull.  MIT Sloan Management Review; September 28, 2022.
[Link] “The Secrets of Great Teamwork.”  Martine Haas and Mark Mortensen.  Harvard Business Review; June 2016.
[Link] “A Leg Up.”  Gary S. Netherton.  Quality Progress; November 2020.
[Link] “Does your company suffer from broken culture syndrome?”  Douglas Ready.  MIT Sloan Management Review; January 10, 2022.
[Link] “5 Unspoken Rules That Lead to a Toxic Culture.”  Scott Mautz.  Inc.; June 6, 2018.
[Link] “Stop These 4 Toxic Behaviors Before Your Employees Quit.”  Scott Mautz.  Inc.; September 28, 2016.
[Link] “Why You’re Struggling to Improve Company Culture.”  Dan Markovitz.  IndustryWeek; December 5, 2017.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Commercial Cartography – Vol. XI:  Materiality Matrix]]>Wed, 19 Apr 2023 04:30:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-xi-materiality-matrix     In common language, “materiality” could be replaced with “importance” or “relevance.”  In a business setting, however, the word has greater significance; no adequate substitute is available.  In this context, materiality is not a binary characteristic, or even a one-dimensional spectrum; instead it lies in a two-dimensional array.
     Materiality has been defined in a multitude of ways by numerous organizations.  Though these organizations developed their definitions independently, to serve their own purposes, there is a great deal of overlap among them.  Perhaps the simplest and, therefore, most broadly-applicable description of materiality was provided by the GHG Protocol:
“Information is considered to be material if, by its inclusion or exclusion, it can be seen to influence any decisions or actions taken by users of it.”
     Recognizing the proliferation and potential risk of divergent definitions, several organizations that develop corporate reporting standards and assessments published a consensus definition in 2016:
“Material information is any information which is reasonably capable of making a difference to the conclusions reasonable stakeholders may draw when reviewing the related information.” (IIRC, GRI, SASB, CDP, CDSB, FASB, IASB/IFRS, ISO)
     The consensus definition is still somewhat cryptic, only alluding to the reason for its existence – corporate financial and ESG (Environmental, Social, Governance) reporting.  As much can be surmised from the list of signatory organizations as from the definition itself.
     A materiality matrix is a pictorial presentation of the assessments of topics on two dimensions or criteria.  It can be presented as a 2 x 2 matrix, such as that in Exhibit 1; slightly increased granularity is provided by a 3 x 3 matrix, as shown in Exhibit 2.  Granularity at its extreme results in a conventional two-dimensional graph, such as that in Exhibit 3.
     As seen in this set of examples, axis titles can vary.  The choices made may depend upon the company’s common language, the purpose of the assessment, or the type of report to be prepared.  For simplicity and consistency, the following convention is used throughout this presentation:
            Horizontal (“X”) axis – Impact on Business
            Vertical (“Y”) axis – Importance to Stakeholders.
This phraseology is equally applicable to financial and ESG reporting, simplifying implementation.
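As a minimal sketch of how this convention translates into practice, the snippet below sorts topics into the quadrants of a 2 x 2 matrix.  The 1–10 scoring scale, the threshold, and the topic scores are illustrative assumptions only; real assessments define their own scales and thresholds (see the “Prepare” step below).

```python
# Hypothetical quadrant placement for a 2 x 2 materiality matrix.
# X axis: Impact on Business; Y axis: Importance to Stakeholders.
# Scores (1-10) and the threshold of 5 are illustrative assumptions.

def quadrant(impact_on_business: float, importance_to_stakeholders: float,
             threshold: float = 5.0) -> str:
    """Place a topic in a quadrant of a 2 x 2 materiality matrix."""
    high_x = impact_on_business >= threshold
    high_y = importance_to_stakeholders >= threshold
    if high_x and high_y:
        return "high priority"
    if high_y:
        return "stakeholder priority"
    if high_x:
        return "business priority"
    return "low priority"

topics = {
    "GHG emissions": (8, 9),      # illustrative scores only
    "Office recycling": (2, 4),
}
for name, (x, y) in topics.items():
    print(f"{name}: {quadrant(x, y)}")
```

A 3 x 3 matrix, or the fully granular graph of Exhibit 3, follows the same logic with finer divisions of each axis.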
 
Types of Materiality
     The generic definitions of materiality presented in the introduction refer to “single materiality.”  The two types of single materiality, commonly labeled “financial” and “impact” (ESG or sustainability) materiality, have already been mentioned.
     “Double materiality” refers to information relevant to both financial and impact materiality reports.  The degree of materiality may differ between types for a particular topic, and often does.  Nonetheless, if information is deemed material in both contexts, it is said to exhibit double materiality.
     Different users of reported information may require varying levels of detail to apply it appropriately.  This situation has been dubbed “nested materiality.”
     “Core materiality” has been introduced as an umbrella term for three common material matters – greenhouse gas emissions, labor practices, and business ethics.  The term was coined as a reflection of the nearly universal materiality of these topics across varied industries.  Each represents one component of ESG – environmental (GHG), social (labor), and governance (ethics).
     “Extended materiality” considers impacts on portions of the value chain outside the assessor’s control.  Understanding upstream (i.e. supply chain) and downstream (i.e. marketplace) impacts better informs one’s own materiality assessments.
     “Dynamic materiality” is the term used to describe the changeable nature of materiality over time.  It is the reason that materiality assessments should be repeated periodically.  Materiality is in flux; previous assessments may no longer be valid.
 
Materiality Assessment
     Conducting a materiality assessment is a structured exercise involving a variety of people throughout an organization.  Descriptions of the process vary, but there is a high degree of agreement about the content and purpose; a seven-step process is presented here.  The steps are:  Prepare, Brainstorm, Categorize, Assess, Plot, Validate, and Publish.  A description of each follows.
 
Prepare.  Preparation for a materiality assessment is similar in many ways to that for various other types of projects.  Much of the effort required involves defining the assessment to be conducted.  Definition includes, but need not be limited to:
  • purpose of assessment – e.g. annual report to shareholders, strategy session, etc.
  • scope of assessment – financial, ESG, or both
  • assessment boundaries – e.g. core or extended materiality
  • team members, roles, and responsibilities
  • stakeholders – internal (directors, employees, unions, etc.) and external (suppliers, customers, investors, neighbors, advocacy groups, etc.)
  • process to be used, including decision-making guidelines
  • threshold values and other limits to materiality (e.g. social, environmental, economic impacts)
  • axis scales and titles and format of resulting matrix.
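The definition items above can be captured in a simple record so that every team member works from the same charter; this is one possible sketch (Python, with all field values hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AssessmentDefinition:
    """Charter for a materiality assessment, per the Prepare step."""
    purpose: str                       # e.g. "annual report to shareholders"
    scope: str                         # "financial", "ESG", or "both"
    boundaries: str                    # e.g. "core" or "extended"
    team: list[str]                    # members, roles elided for brevity
    stakeholders: dict[str, list[str]] # "internal" and "external" groups
    thresholds: dict[str, float]       # materiality limits by category
    axis_titles: tuple[str, str] = ("Impact on Business",
                                    "Importance to Stakeholders")

# Hypothetical example charter
charter = AssessmentDefinition(
    purpose="annual report to shareholders",
    scope="both",
    boundaries="core",
    team=["CFO", "sustainability lead", "operations manager"],
    stakeholders={"internal": ["directors", "employees"],
                  "external": ["investors", "suppliers"]},
    thresholds={"monetary": 600_000.0},
)
print(charter.axis_titles)
```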
 
Brainstorm.  This step can be conducted in a brainstorming session as typically described, or less literally, as a period of information gathering.  All available sources of information should be considered to compile a list of potentially material topics.  This can include existing mechanisms of communication, such as website inquiries, sales and marketing interactions, shareholder calls, customer service call centers, or other established channels.  Surveys, questionnaires, or other information-gathering techniques can also be used specifically to support a materiality assessment.  The methods and tools used should be identified in the assessment process definition.
     Topics that trends suggest may be material in the future should also be captured, even if they do not currently meet the criteria.  Doing so will facilitate future assessments, as previous assessments serve as a key source of potential topics.  Inexperienced members of an assessment team may be unsure where to look for potential materiality.  Fortunately, there are tools available to assist in this research; several are included in the references below.  These resources are only aids to the materiality search; their suggestions should not be interpreted as universal or comprehensive.
 
Categorize.  Arrange potential topics in groups that facilitate further research and assessment.  Groups could reflect the department, region, or other division to which each topic is most relevant.  If a different set of categories is a better fit with the team and organization structure, define the preferred classification scheme in the assessment process to ensure all team members view the information through the same lens.  See Vol. VII:  Affinity Diagram (8Feb2023) for additional guidance.
 
Assess.  Evaluate each topic on the dimensions of Impact to Business and Importance to Stakeholders according to the scales and scope defined during preparation.  Conduct additional research, if required, to quantify the impacts as accurately as possible.  It is imperative that each topic be assessed consistently in order for the materiality matrix to accurately represent the state of the business.
     The International Integrated Reporting Council (IIRC) guidelines for conducting an assessment include several perspectives from which each topic should be viewed and other factors that may influence the magnitude of impacts.  Assessment teams are advised to consider both quantitative and qualitative factors.  Quantitative factors may be direct measures of financial impact, but could also be represented by percentage changes in sales, yield, or other performance metrics.  Qualitative factors are those that “affect the organization’s social and legal licence [sic] to operate,” including reputation and public perception.  These can be affected by the discovery of fraud, excessive pollution, workplace fatalities or illness, or other violations of “social contracts.”
     Both the area and time frame of impacts should be considered.  The area of an impact refers to whether it is internal or external to the organization.  Internal impacts include matters involving the continuity of operations and other disruptions that affect the organization directly.  External impacts include matters that affect stakeholders, who then exert pressure on the organization in various ways.  These pressures may include reputational damage, higher cost of capital, or restricted availability of required resources.
     An impact may have a short-, medium-, or long-term effect on an organization.  Short-term impacts are immediate and usually recoverable, such as an accident or spill.  The definitions of medium- and long-term vary among industries, but “average” or typical time frames used are 3 – 5 years and 5+ years, respectively.
     Medium-term impacts may include resource depletion, contract or license expiration, or other foreseeable change in an organization’s operating environment.  Long-term impacts are often associated with technology development and related regulations, such as renewable energy, electrification of transportation, and artificial intelligence.  The longer the time horizon, the more difficult it is to predict the nature and magnitude of the impact that will be experienced and, therefore, stakeholders’ perceptions.
     The IIRC’s recommended perspectives are:
  • Financial – expressed in monetary terms or financial ratios, such as liquidity or gross margin.
  • Operational – production volume and yield, market share and customer retention, etc.
  • Strategic – “high-level aspirations” such as market leadership, impeccable safety performance record, product development plans, etc.
  • Reputational – evaluation of incidents and the organization’s responses to them:  Were the events foreseeable, preventable?  Were the events caused by negligence or incompetence?  Were the responses timely, appropriate, and effective?  Was the organization forthcoming and transparent regarding causes, responsibility, and recovery plans?
  • Regulatory – an organization’s record of compliance and ability to comply with foreseeable future regulations.
     Viewing an organization from each of the perspectives described reveals potential impacts with financial and non-financial, or direct and indirect, effects.  There is significant overlap in the perspectives, particularly financial, that can be beneficial to an assessment.  Exploration from one perspective can quite naturally lead to taking another perspective; a more thorough and well-reasoned assessment often results.  The table in Exhibit 4 presents an example summary of the factors affecting mine safety.
     Compare each topic’s assessment to defined threshold values to determine which will be included in the materiality matrix and related reports.  If thresholds have been established by Enterprise Risk Management, they should also be applied to the materiality assessment.
     Various thresholds can be defined, both financial and non-financial.  The IIRC references several:
  • “Monetary amount” – Carbon Collective suggests income thresholds equal to 5% of pre-tax profit or 0.5% of revenue.  Alternatively, thresholds equal to 1% of total equity or 0.5% of total assets are suggested.  Appropriate monetary thresholds will vary from one organization to another, based on an array of factors.
  • “Operational effect” – lost or interrupted production; e.g. 5% of planned volume cannot be delivered on schedule.
  • Strategic – effect on organization’s ability to follow strategic plan; e.g. project schedules delayed more than 60 days.
  • Regulatory – the point at which compliance costs exceed the organization’s ability or willingness to continue operations; e.g. 15% increase over current year.
  • Reputational – could be represented by several measures, such as Customer Satisfaction Index, Net Promoter Score, investor confidence surveys, social media research, etc.; e.g. 20% negative response on primary social media platform.
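To make the monetary thresholds concrete, the sketch below (Python; all financial figures are hypothetical) applies the Carbon Collective percentages cited above to a candidate topic’s estimated impact:

```python
# Hypothetical financials; the 5% / 0.5% / 1% figures follow the
# Carbon Collective guidance quoted in the article.
pretax_profit = 12_000_000
revenue = 250_000_000
total_equity = 80_000_000
total_assets = 400_000_000

thresholds = {
    "5% of pre-tax profit": 0.05 * pretax_profit,
    "0.5% of revenue":      0.005 * revenue,
    "1% of total equity":   0.01 * total_equity,
    "0.5% of total assets": 0.005 * total_assets,
}

impact = 900_000  # estimated monetary impact of a candidate topic (hypothetical)

# The topic is material under any threshold its impact meets or exceeds
material_under = [name for name, limit in thresholds.items() if impact >= limit]
print(material_under)
```

A topic that clears even one defined threshold typically warrants inclusion in the matrix; the choice of which thresholds govern is part of the Prepare step.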
     Though included in the Assess step of the process for presentation in the context of their application, selection of thresholds requires significant attention during the Prepare step as the assessment process is defined.  The selection of team members and materiality thresholds are intricately linked.  A wide range of experience is needed on the team to select appropriate thresholds as well as evaluate topics with respect to several factors that determine if the thresholds have been exceeded.
 
Plot.  Create a pictorial record of the assessment in the format(s) defined during preparation.  The number of material topics, range of impacts, documentation standards, and other organizational norms may influence the format of the materiality matrix.  In the example shown in Exhibit 5, UPS differentiates between impact areas and trends that have significant influence on the business and, therefore, must be watched closely.  As shown in Exhibit 6, Unilever has chosen to identify five categories that cover a range of ESG topics.  Note that both present assessment results on a qualitative scale only; businesses are loath to publicly divulge financial information, lest competitors gain an advantage.  Unilever has even excluded items of “low” materiality – those that did not exceed a defined threshold – from the matrix.
     Multiple matrices can be created for different purposes, such as one for a financial report, one for a sustainability report, and a composite matrix for a shareholder report.  Exhibit 7 provides an example of separate matrices for reporting and strategy decisions.  Combining them results in the composite matrix shown in Exhibit 8.  If each marker on the graph were identified by a label, as is the “GHG Emissions” example, the matrix would be cluttered and difficult to read.
     To prevent visual overload, an alternative format, such as that shown in Exhibit 9, could be used.  In this example, each topic is identified by a number and its context – financial or sustainability – by the color of the marker.  An expanded legend or accompanying table (not shown) identifies each topic.  The information contained in an expanded legend, most often, is a short name (e.g. “GHG Emissions” in the previous example), while an accompanying document can contain detailed descriptions of the company’s investments, strategy, and other plans.  A public presentation is likely to contain the former, while the latter would be prepared for a meeting of executives or directors.
 
Validate.  Engage both internal and external stakeholders to assess the validity of the materiality matrix.  This step can be as simple as an informal “gut check,” where stakeholders opine on the absolute and relative ratings of material topics, accepting the matrix if it “feels right.”
     More-sophisticated evaluations involve comparisons of the assessment team’s ratings with those based on independently-acquired data.  The team may be challenged to defend its ratings by presenting supporting data and demonstrating the assessment process.  The objective of such challenges is to evaluate and confirm the strength of evidence and, thus, justify a topic’s position in the matrix.
     When additional data and critical review require it, adjustments are made to ratings and the matrix is updated.  Upon completion of the review and validation to stakeholders’ satisfaction, the materiality matrix, accompanying document, and report are finalized.
 
Publish.  The level of detail included in published reports varies, depending on the intended audience.  As mentioned previously, public disclosures may be limited to an overview, while internal management documents contain far more information, in both breadth and depth.  Typical components of a materiality assessment report include:
  • description of the assessment process and decision-making rules
  • results of the assessment (i.e. the matrix)
  • discussion of priorities and plans derived from or influenced by the assessment
  • the “shelf life” of the assessment (i.e. when it will be repeated).
Any detail relevant to the definition of the assessment, created in the preparation step, is a candidate for inclusion in a final report.  Information that clarifies the logical path to the ratings and conclusions should be included, while extraneous, distracting, or confusing details can be omitted.  Reactions to the report inform the team when its valuations of information are misaligned with those of various audience segments.
 
Materiality and Strategy
     “Strategy” is a very broad term, often clarified by a modifier such as “operations,” “marketing,” or “investment.”  Each of these, and more, can be influenced by a materiality assessment.  Examples of decisions that may be affected include:
Operations
  • Facility location, size, and construction
  • Modes of transportation
Marketing
  • Messaging compatible with expressed interests of consumers
  • Product development to satisfy changing preferences
Investment
  • Transitioning from disfavored industries, funds, etc. to those stakeholders deem worthy
  • Accelerating adoption of “green” technologies
     Strategy development, broadly speaking, consists of three key components – the Three Is:  impact, importance, and influence.  The first two, impact and importance, are addressed by the materiality assessment and can be read directly from the materiality matrix.
     The third component, influence, refers to an organization’s ability to affect the impact of a material topic on its stakeholders.  If an organization lacks this capability, its strategy may focus on risk management, development of capabilities needed to reduce the impact, technology advancement that modifies the materiality landscape, or other methods of compensation.
     The key takeaway is this:  even a high-impact topic that is highly salient to stakeholders (i.e. upper right of the materiality matrix) may not be a high priority; the lack of influence simply renders effort futile.  Presenting this accurately and transparently is crucial to maintaining stakeholder trust and support.
     Another facet of influence is an organization’s ability to affect stakeholders’ perceptions of a topic.  If stakeholders’ assessments of materiality are based on faulty research, corrupted data, etc., correcting the record is perfectly noble.  However, the potential for nefarious use of influence also exists.  For example, ethically-challenged individuals may choose to inappropriately downplay a material topic, mislead stakeholders, or divert attention from a management failure.  Even when used righteously, the practice may be considered manipulative, fostering skepticism and resentment.  It is mentioned here because the presentation would be remiss without it; its inclusion is intended to serve as a strong warning.  Influence should be used in this way rarely and with extreme caution.
 

     The unassuming appearance of a materiality matrix belies the intensity of research and analysis required to make it useful.  It also understates its utility as a strategy-development and communication tool.  The uninitiated may pay it little attention, but for those who see Superman in a newsroom, the insight it can provide is enormous.
           
            For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I: An Introduction to Business Mapping (25Sep2019).
 
References
[Link] “How to make your materiality assessment worth the effort.”  Mia Overall.  Greenbiz; August 15, 2017.
[Link] “The Strategic Value of ESG Materiality Assessments.”  Conservice ESG.
[Link] “Materiality Assessments in 4 Simple Steps.”  Jason Dea.  Intelex; September 2, 2015.
[Link] Materiality Tracker.
[Link] “Sustainability Materiality Matrices Explained.”  NYU Stern Center for Sustainable Business; May 2019.
[Link] “Practitioners' Guide to Embedding Sustainability.”  Chisara Ehiemere and Tensie Whelan.  NYU Stern Center for Sustainable Business; March 13, 2023.
[Link] “The essentials of materiality assessment.”  KPMG International, 2014.
[Link] “Dynamic, Nested and Core materialities - Materiality Madness?”  Madhavan Nampoothiri.  Nord ESG; July 25, 2022.
[Link] “From 0 to Double – How to conduct a Double Materiality Assessment.”  Sebastian Dürr.  Nord ESG; August 2, 2022.
[Link] “Dynamic Materiality: Measuring What Matters.”  Thomas Kuh, Andre Shepley, Greg Bala, and Michael Flowers.  Truvalue Labs, 2019.
[Link] “The materiality madness: why definitions matter.”  Global Reporting Initiative; February 22, 2022.
[Link] “Embracing the New Age of Materiality: Harnessing the Pace of Change in ESG.”  Maha Eltobgy and Katherine Brown.  World Economic Forum; March 2020.
[Link] “Materiality analysis and its importance in CSR reporting.”  Altan Dayankac.  DQS Global; November 3, 2022.
[Link] “Materiality Concept.”  Brooke Tomasetti.  Carbon Collective; March 8, 2023.
[Link] “Materiality:  Background Paper for Integrated Reporting.”  International Integrated Reporting Council; March 2013.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Commercial Cartography – Vol. X:  Work Balance Chart]]>Wed, 05 Apr 2023 04:30:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-x-work-balance-chart     The work balance chart is a critical component of a line balancing effort.  It is both the graphical representation of the allocation of task time among operators, equipment, and transfers in a manufacturing or service process and a tool used to achieve an equal distribution.
     Like other tools discussed in “The Third Degree,” a work balance chart may be referenced by other names in the myriad resources available.  It is often called an operator balance chart, a valid moniker if only manual tasks are considered.  It is also known as a Yamazumi Board.  “Yamazumi” is Japanese for “stack up;” this term immediately makes sense when an example chart is seen, but requires an explanation to every non-Japanese speaker one encounters.  Throughout the following presentation, “work balance chart,” or “WBC,” is used to refer to this tool and visual aid.  This term is the most intuitive and characterizes the tool’s versatility in analyzing various forms of “work.”
     A work balance chart can be used to streamline manufacturing or service operations.  Any repetitive process that involves manual operations, automation, and transfers of work products between operators or equipment can benefit from work balance analysis.  Applications in manufacturing are most common; service providers should take heed of the potential competitive advantage such an analysis could bring to light.
 
Assumptions
     To simplify the initial presentation, preparation for and construction of a work balance chart is subject to several assumptions, including:
  • Activity times are known and constant.
  • Takt time is known and constant.
  • Target cycle time equals takt time.
  • Process resources are dedicated (not shared with other processes).
  • Process uses one-piece flow or pull system (no batching or buffering).
  • A single product is manufactured or service performed (no mix).
  • Process is always fully functional (100% availability).
  • Process achieves 100% acceptable quality (no scrap or rework).
  • Existing process evolved with limited analysis.
The effects of these assumptions on work balance analysis are discussed in the “Adjustments for Reality” section.  Once the basic principles of work balance analysis are understood via an idealized process, assumptions that are no longer valid should be removed.  The resultant WBC will more accurately represent operations, providing opportunities for more effective changes and higher performance.
 
Preparation
     Before an analysis can begin, its boundaries must be established.  Define the process completely, including its start point, end point, and each step between.  Creating a flow chart is a convenient way to document the process definition for reference during analysis.
     Though activity times are assumed to be known and constant, this information must still be collected and organized.  If not already available, a precedence diagram should be created; referencing preexisting diagrams is much more efficient than creating them “on the fly” or, worse, foregoing them (“running blind”).  Activity times can be added to reference diagrams to reduce the number of documents needed during analysis.  Summing the activity times for every step in the process yields the total cycle time or TCT.
     Determine the takt time – the maximum cycle time at which the process is capable of meeting customer demand.  Takt time is calculated by dividing the available work time in a period by the customer demand in that period:
            Takt Time = Available Work Time ÷ Customer Demand.
For manufacturing operations, the shipping frequency provides a convenient period for takt time calculation.  In this framing,
            Takt Time = Work Time Between Shipments ÷ Units per Shipment.
The period used for service operations is often a shift, a day, or a week, depending on the duration of the service analyzed.  It is possible, however, that another, “nonstandard” time period is appropriate; its selection is left to the judgment of the analysis team.
     To determine the optimum number of operators, or stations, for the idealized process, divide the total cycle time by the takt time and round up to the nearest integer:
            Number of Stations = roundup(TCT ÷ Takt Time).
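The preparation calculations can be sketched as follows (Python; the shift time and demand figures are hypothetical, while the 252 s total cycle time echoes the eligibility example later in this installment):

```python
import math

available_time = 27000.0   # available work time per shift (s) — hypothetical
demand = 450               # units demanded per shift — hypothetical

# Takt time: available work time divided by customer demand in the same period
takt = available_time / demand            # 60.0 s per unit

# Optimum station count: total cycle time over takt time, rounded up
total_cycle_time = 252.0                  # sum of all task times (s)
stations = math.ceil(total_cycle_time / takt)
print(takt, stations)
```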
Constructing the WBC
     A work balance chart, like several other diagrams, can be constructed manually or digitally.  Many diagramming efforts benefit significantly from manual construction; speed of development, the cross-section of inputs, and size of the workspace (“canvas”) are typical advantages.  The nature of a WBC, however, often shifts the advantage to digital construction for experienced practitioners, as a description of both methods illuminates.
     Manual construction of a WBC is most valuable for training purposes; advantages in this context include:
  • Tactile engagement reinforces the concept that physical rearrangement may be required to balance a workload.
  • Manual manipulation allows rapid reconfiguration and visual confirmation of completeness (all components accounted for).
  • Physical models are less abstract than digital tools.
  • No computer skills are required.
  • Software may induce unintended limitations on the exploration and experimentation that solidify students’ understanding of concepts.
Materials needed to manually construct a WBC include paper, scissors, ruler, and writing instrument.  An alternative method involving hand-drawn scales on a whiteboard is too imprecise and will not be detailed here (further explanation is probably unnecessary, anyway).
     On a piece of paper, draw a graph with an appropriate time scale, chosen based on the takt time and task times, on the vertical axis.  Along the horizontal axis, place labels for each station or process in sequence from left to right.  For each task, cut a strip of paper to length in proportion to its duration; use the scale established on the graph.  To improve legibility and maintain the proper scale, the graph and task strips can be printed on a computer; these will look similar to the example in Exhibit 1.  Doing so reduces the time required, maintaining focus on the important aspects of the WBC; the ability to draw time scales accurately is the least relevant to a work balancing effort or training.
     On the blank graph, draw a horizontal line at the takt time or target cycle time (they may be different – more on this later).  Use a bright or contrasting color to ensure that the line is easily visible.  Place the task strips on the graph, “stacking” them above the corresponding station or process label.  An example of what this might look like is shown in Exhibit 2.  The example presents the current state of production that clearly exhibits a huge imbalance in the workload.
     To balance the workload among the available stations, rearrange the task strips, targeting equal total task times in each station.  Each task movement is subject to restrictions established by the precedence diagram for the process being analyzed.  The future state of production, with a balanced workload, may look like that presented in Exhibit 3.
     A task eligibility chart can also be created; in it, information contained in the precedence diagram or table is reorganized to be more easily applied directly to the work balance effort.  To see how an eligibility chart is created and utilized, consider the example precedence diagram and table in Exhibit 4 and the derived eligibility chart in Exhibit 5.  In this example, there are 12 tasks to be completed to meet a 57.6 s takt time (the presumed target cycle time).  The number of stations needed is determined by dividing the total task time by the takt time:  252 s/57.6 s = 4.375.  Rounding up, the work is to be balanced among five stations.
     For a task to be eligible for assignment to a station, it must meet all precedence requirements.  A commonly used “rule of thumb” is to assign the eligible task with the longest duration that will not cause the total station time to exceed the target cycle time.  In the example presented, the tasks are assigned in alphabetical order, but this need not be the case; a process with more parallel tasks will have more “mixing” of the task sequences.  The work balance chart in Exhibit 6 provides the graphical representation of the eligibility chart assignment information.
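The longest-eligible-task rule of thumb can be sketched in code.  In the snippet below (Python), the task durations and precedence relationships are hypothetical stand-ins for the article’s Exhibit 4 data, chosen so that they also total 252 s against a 57.6 s takt:

```python
# Hypothetical 12-task process: durations (s) and immediate predecessors.
durations = {"A": 40, "B": 30, "C": 25, "D": 35, "E": 20, "F": 18,
             "G": 22, "H": 15, "I": 12, "J": 15, "K": 10, "L": 10}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["B"],
                "F": ["C"], "G": ["D", "E"], "H": ["F"], "I": ["G"],
                "J": ["H"], "K": ["I", "J"], "L": ["K"]}

takt = 57.6  # target cycle time (s); assumes every task duration <= takt

def balance(durations, predecessors, takt):
    """Assign tasks to stations using the longest-eligible-task rule."""
    assigned, stations = set(), []
    while len(assigned) < len(durations):
        station, load = [], 0.0
        while True:
            # Eligible: all predecessors assigned, fits in remaining time
            eligible = [t for t in durations
                        if t not in assigned
                        and all(p in assigned for p in predecessors[t])
                        and load + durations[t] <= takt]
            if not eligible:
                break
            pick = max(eligible, key=lambda t: durations[t])
            station.append(pick)
            load += durations[pick]
            assigned.add(pick)
        stations.append((station, load))
    return stations

for i, (tasks, load) in enumerate(balance(durations, predecessors, takt), 1):
    print(f"Station {i}: {tasks} ({load:.1f} s)")
```

Note that this greedy heuristic yields a feasible balance, not necessarily the optimal one; the article’s observation about parallel tasks “mixing” the sequence applies here as well.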
     The WBC in Exhibit 6 was generated in a spreadsheet program; compare it to Exhibit 3.  The output of the manual (hybrid, really) process depicted in Exhibit 3 is functional, but imprecise and aesthetically unsatisfying for presentation.  This is true despite the use of a computer to generate the graph and task strips.  With comparable effort, a spreadsheet template can be created to generate an aesthetically pleasing WBC that automatically adapts to task rearrangement.  In subsequent balancing efforts, use of the template is much more efficient than the manual process.  For this reason, experienced practitioners are encouraged to construct WBCs digitally.  Once the concepts of proper execution are well-understood, physical manipulation no longer adds sufficient value to justify an inefficient process.
     A spreadsheet template can also be created to generate a horizontally “stacked” WBC, such as that shown in Exhibit 7 for the eligibility example.  Exhibit 6 and Exhibit 7 present the same information in slightly different formats and with different connotations.  The vertical “stacks” of Exhibit 6 may evoke the concept of “piling on” or putting an operator under increasing load as additional tasks are assigned to a station.  The horizontal bars of Exhibit 7 tend to be less evocative, portraying the inevitable, and mostly unobjectionable, passage of time.  The notions of “workload” and “timeline,” though equivalent in this context, can elicit very different reactions from varying audiences.  Both formats provide accurate, acceptable presentations of the work balance; the choice between them is made for aesthetic reasons.
Adjustments for Reality
     The examples in the previous section allude to use of work balance charts to improve an existing process (see Exhibits 2 and 3) and for process planning (see Exhibits 4, 5, and 6).  The information that can be known and that which must be estimated differs between these two applications.  For example, data from a time study can be used to balance an existing process, but task times must be estimated in a preliminary process plan (i.e. prior to building equipment).
     Once performance data is available for a new process, the workload may require rebalancing.  This is often done assuming a constant activity (task) time; the average of recorded cycles is typically used.  However, this may not be the most effective choice; an “accordion effect” of varying task durations may induce erratic fluctuations in the workflow.
     These fluctuations can be accommodated in the target cycle time.  Activity times are typically normally distributed; this can be verified in the performance data prior to implementing the following strategy.  Consider a hypothetical task for which time study data reveal an average duration of 30 s and standard deviation of 4 s [μ = 30, σ = 4].  To smooth the workflow, the task time “standard” is set to encompass 90% of cycles.  As the normal distribution for this example (Exhibit 8) shows, this task duration is set at 35.1 s [P = 0.90; z = 1.282].
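This percentile calculation can be reproduced with the Python standard library; μ = 30 s and σ = 4 s are the example values from the time study above:

```python
from statistics import NormalDist

mu, sigma = 30.0, 4.0     # time-study mean and standard deviation (s)
coverage = 0.90           # fraction of cycles the standard should encompass

# Inverse CDF of the fitted normal distribution gives the 90th-percentile time
standard_time = NormalDist(mu, sigma).inv_cdf(coverage)
print(round(standard_time, 1))   # 35.1
```

Raising the coverage fraction smooths the workflow further, but at the cost of additional built-in wait time.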
     Variation in activity time is unavoidable for manual tasks; thus, wait time (waste) will exist in a closely-coupled process.  Monitoring and evaluation of productivity and operators’ frustration with the system are required to choose an appropriate target cycle time.  Designing a system to operate at the average task time ensures that only 50% of cycles will meet the design criterion!  The system output will meet design intent only when the average task time is achieved at every step of the process.
     This tension is relieved if physical buffers are built into the system or if output from stations is batched; these practices have an equivalent effect.  With either in place, variation in task time does not propagate to downstream operations; the average task time becomes the most relevant metric for systems of this type.
 
     Takt time may also vary; this is the nature of seasonal demand, for example.  Periods of reduced demand can be accommodated in a few ways:
  • Produce at the average takt time (not possible for services) regardless of the current demand.  Carried inventory compensates for periods of production/demand mismatch.
  • Limit the operating schedule to match total output to demand (system designed for maximum demand).
  • Reduce the number of operators and rebalance the workload for the higher takt time.  A single process may use several WBCs to match output to varying demand.  Seasonal demand may be satisfied with “winter work balance,” “summer work balance,” and “spring/fall work balance” configurations.  Any number of WBCs can be created to manage fluctuating demand.
 
     Several of the assumptions upon which the WBC development presentation was based are interconnected; it is difficult to discuss one without invoking others.  The assumption that “target cycle time equals takt time” may be removed for many reasons, some of which were presented in other assumptions.  If resources – personnel, equipment, etc. – are shared with another process, the target cycle time may be reduced so that sufficient time is available to meet the demand for both processes.  This creates a situation similar to introducing product mix into a process, precluded from this discussion by another assumption.  This topic is best left to a future installment; adequate exploration is beyond the scope of this presentation.
 
     The assumptions of “100% process availability” and “100% acceptable quality,” like that of constant activity times, were made to avoid confusing those new to line balancing.  They must now be adjusted, however, to develop realistic expectations of process performance.
     Quality and availability are two legs of the OEE (Overall Equipment Effectiveness) “stool.”  Achieving 100% performance in both measures for an extended period of time is unlikely for a system of any sophistication.  Therefore, the target cycle time must be adjusted to accommodate the actual or anticipated performance of the system.
     The reliability of a system affects its availability and directly influences the numerator of the takt time calculation.  Output of unacceptable quality is accounted for, indirectly, in the denominator by effectively increasing the demand (an additional unit must be produced to replace each faulty unit).  The modified calculation can be presented as:
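The original formula graphic is not reproduced here; based on the description above, the adjusted calculation can be sketched as shown below, with availability scaling the numerator and yield inflating the denominator.  All figures are hypothetical.

```python
def adjusted_target_cycle_time(available_time_min: float, demand_units: float,
                               availability: float, quality_rate: float) -> float:
    """Target cycle time adjusted for availability and quality losses.

    Availability reduces the usable operating time (numerator); imperfect
    quality inflates the effective demand (denominator), since an extra
    unit must be produced to replace each faulty one.
    """
    effective_time = available_time_min * availability
    effective_demand = demand_units / quality_rate
    return effective_time / effective_demand

# Hypothetical: 450 min/shift, 300 units demand, 95% availability, 98% yield.
tct = adjusted_target_cycle_time(450, 300, 0.95, 0.98)  # ~1.4 min/unit
```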
     The third leg of the OEE stool is productivity, to which the task time variation discussion and Exhibit 8 allude.  An example occurrence of these three losses is depicted in Exhibit 9.  Adjusting target cycle time based on OEE is an alternative method (“shortcut”) of compensating for these losses when they are known or can be estimated with reasonable accuracy.  For this method, the target cycle time is calculated as follows:
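The OEE shortcut, likewise shown as a graphic in the original, amounts to scaling the takt time by the OEE fraction; a minimal sketch with hypothetical figures:

```python
def target_cycle_time_from_oee(takt_time_min: float, oee: float) -> float:
    """Shortcut: scale takt time by OEE so that availability, performance,
    and quality losses are absorbed in a single factor."""
    return takt_time_min * oee

# Planning against a "world-class" OEE of 85% with a 1.5 min/unit takt.
tct = target_cycle_time_from_oee(1.5, 0.85)  # 1.275 min/unit
```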
These calculations are, once again, based on the assumption that resources are dedicated to a single process.  For process planning, the target cycle time can be calculated using a “world-class” OEE of 85%, an industry average, or other reasonable expectation.
     The final assumption stated involves continuous improvement (CI) efforts.  Processes evolve to accommodate changing customer requirements, material availability, and other influences on production.  Many times, process changes are implemented with incomplete analysis, whether due to urgency or oversight, resulting in a system that is inefficient and unbalanced.  Learning curves may also affect tasks differently; experience may improve performance in some tasks more than others.  Including this assumption in the discussion serves as a reminder that line balancing is a CI effort involving both efficiency and arrangement.
 
Additional Notes on Balancing
     If no satisfactory balance can be found without exceeding the target cycle time, there are several approaches available, including:
  • Increase operating time, adjusting takt time and target cycle time to an achievable production rate.
  • Increase the number of stations, lowering total station time below the target.
  • Implement parallel processing of the longest-duration tasks by adding equipment and/or operators.
  • Conduct a detailed time & motion study to discover opportunities for increased productivity (e.g. left-hand/right-hand operations, improved sequencing).
  • Increase operating speed of equipment via refurbishment, replacement, or new technology.
  • Balance all stations except the last.  The excess (waiting) time in the final station can be used to restock materials, perform a task in another process, or other creative use.  It also serves as incentive to further improve the system for efficient, balanced operation.  Alternatively, it represents capacity for additional content, such as new product features or service components.
  • Increase sale price to lower demand to acceptable rate while maintaining profitability.
 
     The digitally-generated WBC examples (Exhibits 6 and 7) were created with “traditional” spreadsheet data presentation and charting tools.  Pivot tables can also be used to organize data and generate charts; however, their use requires additional skills and manual updates of charts.  If one is sufficiently skilled and comfortable in the use of pivot tables, they are a viable option, though the advanced users who benefit from them are probably a small fraction of practitioners.
 
Notes on Simulation
     Simulation software can be used to facilitate line balancing and other operational assessments.  Work balancing projects like those described here will usually not benefit greatly from the additional effort that simulation requires; highly sophisticated systems may warrant it, however.  A complex product mix, highly variable task durations, complex maintenance schedules, and unpredictable demand may complicate the analysis to a sufficient degree to justify the use of simulation software.
     A spreadsheet program can also be useful for “what if” type experimentation and is sufficient for most line balancing projects.  Monte Carlo simulation, distribution analysis, and other simple functions can also be performed in a spreadsheet.
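The same “what if” experimentation can be scripted outside a spreadsheet.  The Monte Carlo sketch below estimates how task-time variation inflates the effective cycle time of a paced line, where the slowest station governs each cycle; the station means and standard deviations are hypothetical.

```python
import random

def simulate_line_cycle(means, sds, runs=10_000, seed=42):
    """Monte Carlo estimate of the average effective line cycle time when
    station times vary; the slowest station paces each cycle."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        # Sample each station's time (clamped at zero) and take the slowest.
        total += max(max(rng.gauss(m, s), 0.0) for m, s in zip(means, sds))
    return total / runs

# Hypothetical 4-station line balanced near 1.4 min with modest variation.
means = [1.40, 1.38, 1.35, 1.40]
sds = [0.10, 0.08, 0.12, 0.09]
avg_cycle = simulate_line_cycle(means, sds)
# avg_cycle exceeds every station mean -- the cost of unbuffered variation.
```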
 
 
     Approaching the limits of production capability requires the most complete and accurate information possible.  It is imperative to account for variability in human task performance, equipment reliability, quality attainment, predictability of demand, and other factors in process planning and development.  Increased efficiency and improved work balance are circular – each supports the other – and should be pursued in conjunction whenever feasible.
 
     For additional guidance or assistance with line balancing, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I: An Introduction to Business Mapping (25Sep2019).
 
References
[Link] Line Balancing Series.  Christoph Roser. All About Lean, January 26, 2016.
[Link] “The Balancing Act:  An Example of Line Balancing.”  Brian Harrington.  Simul8.
[Link] “Operator Balance Chart.”  Lean Enterprise Institute.
[Link] “Understanding the Yamazumi Chart.”  OpEx Learning Team; July 19, 2018.
[Link] “What Is Line Balancing & How To Achieve It.”  Tulip.
[Link] “Lean Line Balancing in the IT Sector.”  Rupesh Lochan.  March 9, 2011; iSixSigma.
[Link] Normal Distribution Generator.  Matt Bognar.  University of Iowa, 2021.
[Link] Normal Distributions.
[Link] The New Lean Pocket Guide XL.  Don Tapping; MCS Media, Inc., 2006.
[Link] The Lean 3P Advantage.  Allan R. Coletta; CRC Press, 2012.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Commercial Cartography – Vol. IX:  Precedence Diagram]]>Wed, 22 Mar 2023 04:00:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-ix-precedence-diagram     A precedence diagram is a building block for more advanced techniques in operations and project management.  Precedence diagrams are used as inputs to PERT and Gantt charts, line balancing, and Critical Path Method (topics of future installments of “The Third Degree”).
     Many resources discuss precedence diagramming as a component of the techniques mentioned above.  However, the fact that it can be used for each of these purposes, and others, warrants a separate treatment of the topic.  Separate treatment is also intended to encourage reuse, increasing the value of each diagram created.
     A discussion of the nature of task dependencies is a useful preface to one on precedence diagramming.  While it is conceivable that a useful precedence diagram could be created without a deep understanding of dependencies, the result would likely be suboptimal.  If dependencies are not well-understood, contingency planning and decision-making are much less effective.  Task, or activity, dependencies are defined by two binary characteristics:  mandatory vs. discretionary and external vs. internal.
     A mandatory task dependency often involves a physical limitation; i.e. it is not possible to perform Task B until Task A is complete.  For example, it is not possible to frame a house until the foundation has been poured, or to roof it before it has been framed.  It may also involve a legal, regulatory, or contractual obligation; i.e. the “tasker” – the individual, group, or organization performing the task – is bound by law or agreement to adhere to a specified task sequence.  Continuing the house-building example, inspections of electrical and plumbing installations must be complete, to verify compliance with building codes, before sheetrock can be hung.  A contractual mandatory dependency is created when the bank financing the construction requires specified milestones to be reached and approved before subsequent funds are released.
     A discretionary dependency is a procedural choice defined by the tasker or customer.  It often represents an accepted best practice, such as the most efficient use of resources to complete a given series of tasks; it could also simply be a preference.  A builder may prefer to paint all rooms before hanging any wallpaper, but a shortage of paint for the living room need not delay wallpapering of the dining room.  Discretionary dependencies reflect desires, but flexibility to respond to changing circumstances is retained.
     An external dependency exists where a “non-project” milestone must be reached before a project activity can begin.  In the example above, electrical and plumbing inspections are external activities that must be complete before drywalling (internal activity) can begin.
     An internal dependency involves only project work and milestones.  Stated another way, internal dependencies are relationships between activities within the tasker’s control.  As such, activities with internal dependencies may be subject to expediting efforts.
     Possible combinations of these characteristics define four dependency types:
  • mandatory external – the tasker has little to no influence on the achievement of the required milestone.  The electrical and plumbing inspection required prior to drywalling, as mentioned previously, is a mandatory external dependency.
  • mandatory internal – the tasker cannot modify the requirement, but can influence when the milestone is reached.  Packaging of a product prior to shipping is a mandatory internal dependency.
  • discretionary external – the tasker can choose to modify a task sequence or milestone requirement despite the non-project work involved.  For example, a third-party assessment should be completed before project activities begin.  However, failure to reach the non-project milestone (completed assessment) does not prevent commencement of project work.  Nothing prevents a prospective buyer from making a real estate purchase offer prior to receiving an appraisal of the property, though it is clearly advisable to wait for it in most situations.
  • discretionary internal – the tasker can exert the greatest influence on the activity sequence and milestone achievement by modifying how and when project activity is conducted.  A product development plan may call for finite element analysis (FEA) to be complete before a prototype is built.  If multiple prototypes are planned, the first may be built before the design is finalized to expedite activities less dependent on the FEA results, such as aesthetic assessments or packaging trials.
An obligatory 2 x 2 matrix summarizing the four types of task dependency is provided in Exhibit 1.
     Understanding dependency aids decision-making when adapting project execution to changing circumstances.  To establish precedence, a temporal component must be added; the temporal information is the key component of a precedence diagram.  Precedence describes the logical relationship between predecessor and successor activities in a project or process.  There are four possibilities:
  • Finish-to-Start (FS) – the predecessor activity must be complete to begin the successor activity.  Examples in the dependency discussion above were described as FS constraints; a sequential series of tasks is common and intuitive.
  • Finish-to-Finish (FF) – the predecessor activity must be complete to complete the successor activity.  Drywall spackling must be complete to finish painting the walls.
  • Start-to-Start (SS) – the predecessor activity must begin to begin the successor activity.  Mortar mixing must begin in order for bricklaying to commence.  Both Lights! and Camera! must begin before the Action!! can commence.
  • Start-to-Finish (SF) – the predecessor activity must begin to complete the successor activity; an uncommon precedence relationship.  New mobile phone service must be activated before the old service can be disconnected to avoid interruption of service.
     A graphical representation of each precedence relationship is provided in Exhibit 2.  Exhibit 3 demonstrates how activities can be shown to be subject to multiple precedence relationships.
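These four relationships reduce to simple timing comparisons.  The sketch below encodes them as checks on activity start and finish times; the drywall example timings are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    start: float
    finish: float

def satisfies(kind: str, pred: Activity, succ: Activity) -> bool:
    """True if the given precedence relationship holds between
    a predecessor and a successor activity."""
    checks = {
        "FS": pred.finish <= succ.start,   # finish before successor starts
        "FF": pred.finish <= succ.finish,  # finish before successor finishes
        "SS": pred.start <= succ.start,    # start before successor starts
        "SF": pred.start <= succ.finish,   # start before successor finishes
    }
    return checks[kind]

spackle = Activity("spackle drywall", start=0, finish=4)
paint = Activity("paint walls", start=2, finish=6)
ff_ok = satisfies("FF", spackle, paint)  # True: spackling ends before painting ends
fs_ok = satisfies("FS", spackle, paint)  # False: painting began before spackling ended
```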
     Precedence diagrams can be drawn in two pictorial formats – Activity on Node (AON) and Activity on Arrow (AOA); both are types of network diagram.  In an AON diagram, the task or activity descriptions are placed at the nodes and the arrows represent precedence relationships.  The example in Exhibit 4 has no precedence identifiers on the arrows; therefore, it is assumed that only FS constraints exist in this task sequence.
     In an AOA diagram, activity descriptions are placed on the arrows and the nodes serve as milestones.  As shown in Exhibit 5, an AOA diagram may require “dummy” activities to represent additional precedence relationships.  In this example, the dummy activity is added to show that both Activity A and Activity B are predecessors of Activity C.  Again, a purely sequential execution is assumed because no precedence notation has been used; FS is the default.
     A third method of presenting precedence relationships is in tabular format.  An example precedence table, created by a tennis tournament planner, is shown in Exhibit 6.  Each activity is described and assigned a code to simplify references to it.  Predecessor activities are then identified by assigned codes.  A pictorial diagram can be generated from the information contained in this table in either AON format (Exhibit 7) or AOA format (Exhibit 8).  The reverse is also true; precedence information can be transferred from a pictorial diagram to a table.
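A precedence table maps directly onto a dependency graph, from which a valid activity sequence can be derived automatically.  The sketch below uses Python’s standard graphlib; the activity codes and predecessor assignments are illustrative, not those of Exhibit 6.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical precedence table: each activity code maps to its predecessors,
# mirroring the "predecessor" column of a tabular precedence format.
predecessors = {
    "A": [],          # no predecessor; may start immediately
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],  # both B and C must precede D
    "E": ["D"],
}

# One valid execution order that respects every precedence relationship.
order = list(TopologicalSorter(predecessors).static_order())
```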
     The choice of format(s) to use is usually a simple preference.  Many find the AON diagram intuitive and easy to use, while dummy activities may be more difficult to process rapidly.  An experienced practitioner may choose to forego a graphical diagram; it is a simple matter to enter the information in a spreadsheet, but graphical capabilities may be limited or cumbersome.  If one can process the information with sufficient ease in tabular format, a graphical diagram is unnecessary.
     The diagrams in Exhibit 7 and Exhibit 8 use the codes assigned in the table of Exhibit 6 rather than full activity descriptions.  The subscript number attached to each activity code is its estimated duration, found in the rightmost column of the table.  Durations are not required on precedence diagrams and are normally added only when advanced techniques, mentioned in the introduction, are employed.  The use of estimated durations will be discussed in future installments when these techniques are explored.
 
     At times in this presentation, constraint has been used as a synonym for precedence relationship.  This is a valid substitution, as precedence requirements create constraints on the execution of a task sequence, limiting flexibility available for the tasker to exploit.
     Some resources refer to precedence relationships, as defined here, as dependencies; the performance of one task is dependent on the performance of another.  If the context is understood, the overlapping terminology is not catastrophic.  Dependency and precedence are differentiated here because that distinction best organizes the information.  Both are fundamental building blocks of the advanced techniques to be presented later.
 
     Precedence diagrams are used to document production and construction processes, in project management, and in event planning.  One could even be used to tame the chaos of a busy family’s daily life.  Managing soccer games, band practice, visits to the veterinarian, and PTA meetings may be less daunting if these activities are clearly sequenced in advance.
     Readers are encouraged to experiment with different presentation formats and practice identifying dependencies and precedence relationships.  Planning daily activities is a judicious use of a new tool.  Ideally, the first application of any technique is a low-risk endeavor, providing a comfortable learning curve, limited potential consequences, and a solid foundation on which to build skills with advanced techniques.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I: An Introduction to Business Mapping (25Sep2019).
 
References
[Link] PMBOK® Guide - Sixth Edition.  Project Management Institute, 2017.
[Link] “PDM – Precedence Diagramming Method [FS, FF, SS, SF] (+ Example).” Project-Management.info.
[Link] “Precedence Diagram Method (PDM).”  AcqNotes, January 1, 2023.
[Link] “Arrow diagramming method.”  Wikipedia, September 18, 2021.
[Link] Service Management, 8ed.  James A. Fitzsimmons, Mona J. Fitzsimmons, and Sanjeev K. Bordoloi.  McGraw-Hill/Irwin, 2014.
[Link] “Look at four ways to set precedence relationships in your schedule.”  tommochal.  TechRepublic, January 28, 2008.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>