<![CDATA[JayWink Solutions - Blog]]>Tue, 26 Sep 2023 15:54:46 -0400Weebly<![CDATA[Occupational Soundscapes – Part 5:  Audiometry]]>Wed, 20 Sep 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-5-audiometry     Audiometry is the measurement of individuals’ hearing sensitivity using finely-regulated sound inputs.  It is a crucial component of a hearing loss prevention program (HLPP) with an emphasis on the range of frequencies prevalent in speech communication.  To be valid, audiometric testing must be conducted under controlled conditions and the results interpreted by a knowledgeable technician or audiologist.
     This installment of the “Occupational Soundscapes” series provides an introduction to audiometry, requirements for equipment, facilities, and personnel involved in audiometric testing, and the presentation and interpretation of test results.  It targets, primarily, those enrolled in – as opposed to responsible for – an HLPP.  Its purpose is to develop a basic understanding of a critical component of hearing conservation efforts to, in turn, engender confidence in the administration of procedures that may be foreign to many who undergo them.
The Audiologist
     Audiometric testing is conducted by an audiologist, audiometric technician, audiometrist, or physician.  Distinctions among these roles are not important to the present discussion; therefore, the term audiologist will be applied to any competent administrator of audiometric tests.
     Demonstration of competency in audiometric testing is typically achieved by attaining certification from the Council for Accreditation in Occupational Hearing Conservation (CAOHC) or equivalent body.  Physicians, such as otolaryngologists, are certified by their respective medical boards.
     The Occupational Safety and Health Administration (OSHA) requires that audiometric testing be administered by a licensed or certified audiologist, physician, or technician capable of obtaining valid audiograms (see “The Results,” below) and maintaining test equipment in proper working order.  OSHA does not require technicians operating microprocessor-controlled (i.e. automated) audiometers (see “The Equipment,” below) to be certified, but the National Institute for Occupational Safety and Health (NIOSH) rejects this exemption.
 
The Facility
     Audiometric testing is typically conducted in one of three types of test facility – onsite, mobile, or clinical.  Each has unique characteristics that must be considered to determine which is best-suited to an organization and its HLPP.
     An onsite test facility utilizes dedicated space within an organization where an audiometric test booth is permanently installed.  An onsite facility is typically feasible only for large organizations with more than 500 noise-exposed employees enrolled in an HLPP at a single location.  Dedicated facilities often require full-time professional staff, further limiting the range of organizations for which onsite facilities are appropriate.
     Mobile test facilities may be provided by a third-party contractor to support an organization’s HLPP.  This may be an appropriate solution for an organization with multiple operations throughout a region when the number of employees enrolled in the HLPP at each location is relatively small.
     A clinical test facility is an independent medical or occupational health practice.  Employees schedule audiometric tests as they would an eye exam, annual physical checkup, or other outpatient procedure.  For smaller entities or programs, this is often the most practical choice.  Administration by an independent brick-and-mortar medical practice may also increase employees’ confidence in the HLPP, providing a psychological benefit that is difficult to quantify.
     The facility, regardless of the type chosen, must be sufficiently isolated to prevent interference with audiometric testing.  Vibrations, ambient noise, and distracting sounds must be minimized to ensure a valid audiogram.  Maximum Permissible Ambient Noise Levels (MPANLs) are defined in standards and regulations (e.g. ANSI S3.1, 29 CFR 1910.95) for various types of test equipment.  It is important to note that sounds below the required MPANL, such as phone alerts, conversation, or traffic, can still be distracting and should be avoided.
 
The Equipment
     The two pieces of equipment most relevant to this discussion are the audiometer and the earphone.  There are three types of audiometer that may be encountered in an HLPP – manual, self-recording, and computer-controlled.  In the context of occupational hearing conservation, pure-tone, air-conduction audiometers are used; other types (e.g. bone-conduction) may be utilized for advanced analysis and diagnosis.
     Using a manual audiometer, the audiologist retains control of the frequency, level, and presentation of tones and manually records results.  This is the least sophisticated, thus least expensive, type of audiometer.  It is also the most reliant upon an audiologist’s skill and concentration.
     A self-recording, or Békésy, audiometer (named for its inventor) controls the frequency and level of tones, varying each according to the test subject’s responses; test results can be difficult to interpret.  This type of audiometer is no longer in common use in occupational HLPPs; its use is more common in research settings where its finer increments of tone frequency and level control are advantageous.
     Computer-controlled audiometers are prevalent in modern practice.  Continually-advancing technology has improved reliability and added automated functions, such as data collection, report generation, and test interruption for excessive ambient noise.  Stand-alone units may be called microprocessor audiometers; they also perform automated tests, but have fewer capabilities and cannot be upgraded as easily as software residing on a PC.
     There are also three types of earphone available for audiometric testing – supra-aural, circumaural, and insert.  A more-precise (“technical”) term for an earphone is “transducer;” “headset” or “earpiece” is more colloquial.
     Supra-aural earphones consist of two transducers, connected by a headband, that rest on the test subject’s outer ears; no seal is created.  Therefore, little attenuation is provided, requiring increased diligence in control of ambient sounds.
     Circumaural earphones consist of two transducers, housed in padded “earmuffs” that surround the ears, connected by a headband.  The seal provided by the earmuffs, though imperfect, provides significantly greater attenuation of ambient sound than supra-aural earphones.
     Insert earphones consist of flexible tubes attached to foam tips that are inserted in the ear canal.  The foam tip seals the ear canal, providing the greatest attenuation of ambient sound.  Test tones are delivered directly to each ear via the flexible tubes; the lack of physical connection between the transducers reduces the opportunity for transmission of tones from the tested ear to the “silent” ear.
     Some test subjects may experience discomfort, particularly when using insert earphones, which could lead to distraction that influences test results.  Recognizing signs of discomfort, distraction, or other interference is among the required skillset of an effective audiologist.
     Evidence suggests that the choice of earphone does not significantly affect test reliability.  However, earphones and audiometers are not interchangeable; an audiometer must be calibrated in conjunction with a paired earphone to provide valid test results.
 
The Test
     A typical audiometric test does not evaluate the entire frequency range of human hearing capability (~20 – 20,000 Hz).  Instead, the focus of testing is on the range of critical speech frequencies introduced in Part 2 of the series.  Specific test frequencies used are 500, 1000, 2000, 3000, 4000, and 6000 Hz.  Testing at 8000 Hz is also recommended for its potential diagnostic value; testing at 250 Hz may also be included.
     Each ear is tested independently by delivering pure tones at each frequency and varying levels, usually in 5 dB increments.  The minimum level at which a subject can hear a tone a specified proportion of the times it is presented (e.g. 2 of 3 or 3 of 5) is the person’s hearing threshold at that frequency.  Consecutive tests indicating thresholds within ±5 dB are typically treated as “consistent,” as this level of variability is inherent to the test.
     A single audiometric test may identify a concern, but multiple tests are needed to identify causes and determine appropriate actions.  The first test conducted establishes the subject’s baseline hearing sensitivity.  The subject should limit exposure to less than 80 dB SPL for a minimum of 14 hours prior to a baseline test, without the use of hearing protection devices (HPDs).  Some test protocols reduce the quiet period to 12 hours minimum or allow use of HPDs, but an extended period of “unprotected rest” is preferred.
     A baseline test is required within 6 months of an employee’s first exposure to the loud environment, though sooner is better.  Ideally, a baseline is established prior to the first exposure, thus eliminating any potential influence on the test results.
     Monitoring tests are conducted annually, at minimum.  They are often called, simply, annual tests, though more frequent testing is warranted, or even required, in some circumstances.  Monitoring tests are conducted without a preceding “rest” period, at the end of a work shift, for example.  Doing so provides information related to the effectiveness of HPDs, training deficiencies, etc.
     A retest is conducted immediately following a monitoring test indicating a 15 dB or greater hearing loss in either ear at any of the required test frequencies.  This is done to correct erroneous results caused by poor earphone fitment, abnormal noise intrusions, or other anomaly in the test procedure.
     A confirmation test is conducted within 30 days of a monitoring test indicating a significant threshold shift (discussed further in “The Results,” below).  Confirmation test protocols mimic those of a baseline test to allow direct comparison.
     Exit tests are conducted when an employee is no longer exposed to the loud environment.  This may also be called a transfer test when the cessation of exposure is due to a change of jobs within the organization, rather than termination of employment.  Exit test protocols also mimic those of a baseline test, facilitating assessment of the impact of working conditions over the course of the subject’s entire tenure.
 
The Results
     The results of an audiometric test are recorded on an audiogram; a blank form is shown in Exhibit 1.  Tone frequencies (Hz) are listed on the horizontal axis, increasing from left to right.  On the vertical axis, increasing from top to bottom, is the sound intensity level scale (dB).  This choice of format aligns with the concept of hearing sensitivity; points lower on the chart represent higher intensity levels required for a subject to hear a sound and, thus, lower sensitivity to the tested frequency.
     The audiogram shown in Exhibit 2 places examples of familiar sounds in relative positions of frequency and intensity.  Of particular interest is the “speech banana” – the area shaded in yellow that represents typical speech communications.  Presented this way, it is easy to see why differentiating between the letters “b” and “d” can be difficult.  These letters hold adjacent positions at the lower end of the speech frequency range, where several other speech sounds are also clustered.  This diagram also reinforces the idea that the abilities to hear chirping birds and whispering voices are among the first to be lost; they are high-frequency, low-intensity sounds.
     Visual differentiation of data for each ear is achieved by using symbols and colors.  Each data point for a subject’s left ear is represented by an “X,” while each data point for the right ear is represented by an “O.”  Colors are not required; when they are used, the convention is to show left-ear data in blue and right-ear data in red.  The increased visual discrimination facilitates rapid interpretation of test results, particularly when all data for a subject are shown in a single diagram.  When baseline data are shown on a monitoring audiogram, the baseline data is typically shown in grey to differentiate between historical and current test data.
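     To make these plotting conventions concrete, the following is a minimal matplotlib sketch of an audiogram in the format described above; the threshold values, axis limits, and styling details are illustrative assumptions, not data from the exhibits.

```python
import matplotlib.pyplot as plt

# Illustrative thresholds (dB hearing level) at standard test frequencies -- not real data.
freqs = [500, 1000, 2000, 3000, 4000, 6000, 8000]
left_ear = [10, 10, 15, 20, 25, 20, 15]    # plotted as blue "X"
right_ear = [5, 10, 10, 15, 20, 15, 10]    # plotted as red "O"

fig, ax = plt.subplots()
ax.plot(freqs, left_ear, marker="x", color="blue", label="Left ear (X)")
ax.plot(freqs, right_ear, marker="o", markerfacecolor="none", color="red", label="Right ear (O)")

ax.set_xscale("log")                        # audiogram frequency axis is logarithmic
ax.set_xticks(freqs)
ax.set_xticklabels([str(f) for f in freqs])
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("Hearing level (dB)")
ax.set_ylim(90, -10)                        # intensity increases toward the bottom of the chart
ax.legend()
plt.show()
```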
     The vertical scale represents a person’s hearing threshold – the minimum sound intensity level required for the test tone to be heard.  An example audiogram, representing “normal” hearing using the formatting conventions described above, is shown in Exhibit 3.  Sound stimuli above the line on the audiogram are inaudible; only those on or below the line can be heard by the subject.  Widely-accepted definitions of the extent of hearing loss are as follows:
  • normal hearing:  < 20 dB hearing level;
  • mild hearing loss:  20 – 40 dB hearing level;
  • moderate hearing loss:  40 – 70 dB hearing level;
  • severe hearing loss:  70 – 90 dB hearing level;
  • profound hearing loss:  > 90 dB hearing level.
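     As a quick illustration of the categories above, a short sketch mapping a hearing level to its descriptor; because the published ranges share their endpoints, assigning each boundary value to the more severe category is an assumption.

```python
def classify_hearing_level(hl_db: float) -> str:
    """Map a hearing level (dB) to a descriptive category.

    Boundary values are assigned to the more severe category by assumption.
    """
    if hl_db < 20:
        return "normal hearing"
    if hl_db < 40:
        return "mild hearing loss"
    if hl_db < 70:
        return "moderate hearing loss"
    if hl_db < 90:
        return "severe hearing loss"
    return "profound hearing loss"

print(classify_hearing_level(35))  # mild hearing loss
```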
     The example audiogram in Exhibit 4 also demonstrates the use of symbols and colors to differentiate data, though the dual-chart format makes it less critical.  The data is also tabulated to provide precise threshold levels for each frequency.
     A significant drop in sensitivity, in both ears, at 4000 Hz is depicted in Exhibit 4.  This is the infamous “4K notch,” indicative of noise-induced hearing loss (NIHL).  The appearance of this notch or other deviation from normal hearing should elicit an appropriate response.
     The presence of a notch in a baseline audiogram suggests that permanent hearing loss has already occurred.  Appropriate measures must be taken to ensure that no further damage occurs.  Furthermore, additional assessments may be necessary to ensure that the subject’s abilities are compatible with the work environment.  If diminished communication abilities create a hazard for the subject or others, for example, an appropriate reassignment should be sought.
     The appearance of a notch or other decline in hearing sensitivity in a monitoring audiogram should trigger follow-up testing.  A retest is conducted to ensure the validity of the data by verifying that the facility and equipment are operating within specifications and the test was conducted properly by both the subject and audiologist.  NIOSH recommends retesting when a monitoring audiogram indicates a 15 dB or greater increase in hearing level, relative to the baseline audiogram, at any frequency from 500 to 6000 Hz.
     If the monitoring and retest audiograms are consistent, two parallel paths are followed.  On one path, the subject undergoes a confirmation test to determine if the indicated hearing loss is permanent.  Appropriate follow-up actions are determined according to the results of this test.
     On the other path, HPD use and effectiveness is reviewed to determine necessary changes to the individual’s work process or to the HLPP more broadly.  Other changes to the work environment may also be necessary; noise-control strategies will be discussed further in future installments of this series.
     The decline in hearing sensitivity represented by a lower line on an audiogram is called a threshold shift.  When the arithmetic average of the differences between the baseline and monitoring audiograms at 2000, 3000, and 4000 Hz is 10 dB or greater in either ear, a standard threshold shift (STS) has occurred.  An STS is depicted in the comparative audiogram of Exhibit 5; calculation of the shift’s magnitude is shown in the table.
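     A minimal sketch of the STS calculation just described, assuming single-ear thresholds are available as simple frequency-to-level mappings; the function name and data values are hypothetical.

```python
STS_FREQS = (2000, 3000, 4000)  # Hz

def standard_threshold_shift(baseline: dict, monitoring: dict) -> float:
    """Average shift (dB) at 2000, 3000, and 4000 Hz relative to baseline."""
    return sum(monitoring[f] - baseline[f] for f in STS_FREQS) / len(STS_FREQS)

# Hypothetical single-ear data (dB hearing level by frequency).
baseline   = {500: 5, 1000: 5, 2000: 10, 3000: 10, 4000: 15, 6000: 20}
monitoring = {500: 5, 1000: 10, 2000: 20, 3000: 25, 4000: 30, 6000: 25}

shift = standard_threshold_shift(baseline, monitoring)
print(f"Average shift: {shift:.1f} dB")   # 13.3 dB
print("STS" if shift >= 10 else "No STS")
```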
     If the change in hearing sensitivity is shown by confirmation testing to be irreversible, a permanent threshold shift (PTS) has occurred.  Some level of hearing loss is recoverable with “rest” in a quiet setting.  This change is called a temporary threshold shift (TTS).  Appropriate action must be taken to prevent a TTS from becoming a PTS.
     A baseline audiogram represents a person’s “best-case” hearing or maximum sensitivity.  Therefore, if subsequent testing results in a “better” audiogram than the baseline, the baseline is replaced by the new audiogram.  This can occur if influences on the baseline test were not noticed or properly addressed.  Examples include an insufficient rest period preceding the test, intrusive noise or vibration in the test chamber, and suboptimal earphone fitment.
     Results other than a pronounced 4K notch can also prompt additional testing.  The series’ focus remains on NIHL; therefore, only a brief overview will be provided here.  Interested readers are encouraged to consult other sources for additional information.
     Bone-conduction testing is performed with transducers placed behind the ears.  This type of test may be warranted to diagnose an occlusion of the ear canal, which can include impacted cerumen (“earwax”), or other condition of the outer or middle ear that limits air-conducted hearing.  Conductive hearing loss is suggested by differences between air- and bone-conduction thresholds of greater than 10 dB.  An example audiogram depicting this condition in one ear is shown in Exhibit 6.
     A positively-sloped audiogram, such as that shown in Exhibit 7, depicts higher sensitivity to higher frequencies, often indicative of a disorder of the middle or inner ear.  In the case of Meniere’s disease, for example, audiometric testing may be used to validate a medical diagnosis, whereas the reverse is often true for other conditions.
     A negatively-sloped audiogram, such as that shown in Exhibit 8, depicts lower sensitivity to higher frequencies, often indicative of the advancement of presbycusis (age-related hearing loss).  Guidance on the appropriate use of an audiogram of this nature in the context of an HLPP varies.  A non-mandatory age-adjustment procedure remains in the OSHA standard (29 CFR 1910.95, Appendix F), though NIOSH has rescinded support for the practice of “age correction”.  Organizations utilizing age-adjusted audiograms should consider that OSHA regulations tend to follow NIOSH recommendations; the lag on this specific matter has been quite long already.
The Bottom Line
     Noise-induced hearing loss (NIHL) is the accumulation of irreparable damage to the inner ear, particularly the fine hairs of the cochlea (see Part 2).  Hearing loss usually occurs in higher frequencies first.  The focus of audiometric testing on speech communication leads us to define “high frequency” as the 3000 – 6000 Hz range, where the 4K notch is of particular concern.  Hearing loss in frequencies above 8000 Hz often goes undiagnosed, as the highest frequencies in the audible range are rarely tested.
     NIHL is one of several possible causes of hearing impairment.  Other causes include hereditary conditions, exposure to ototoxic substances, and illness (e.g. infection).  The various audiometric tests are valuable tools beyond the scope of NIHL; they can also aid diagnosis of several other conditions.  For example, a baseline audiogram may confirm the presence of a congenital disorder, or a confirmation test may reveal that an STS was caused by an illness from which, in the interim, the subject had recovered.
     A thorough, well-crafted health and wellness program will include audiometric testing.  In addition to the direct benefits of an HLPP, information about other conditions may also be obtained, further improving the work environment.  Psychological well-being of employees can be improved via increased effectiveness of verbal and nonverbal communication, in addition to the physical health benefits that participation in such a program can provide.


     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] “Hearing Protection.”  Laborers-AGC Education and Training Fund; July 2000.
[Link] “Criteria for a Recommended Standard - Occupational Noise Exposure, Revised Criteria 1998.”  Publication No. 98-126, NIOSH, June 1998.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.
[Link] “29 CFR 1910.95 - Occupational noise exposure.”  OSHA.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Pediatric Audiology:  A Review.”  Ryan B. Gregg, Lori S. Wiorek, and Joan C. Arvedson.  Pediatrics in Review, July 2004.
[Link] “Familiar Sounds Audiogram:  Understanding Your Child’s Hearing.”  Hearing First, 2021.
[Link] “Hearing and Speech.”  University of California – San Francisco, Department of Otolaryngology – Head and Neck Surgery.
[Link] “Audiograms.”  ENT Education Swansea.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 4:  Sound Math]]>Wed, 06 Sep 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-4-sound-math     Occupational soundscapes, as outlined in Part 1, are comprised of many sounds.  Each has a unique source and set of defining characteristics.  For some purposes, treating all sounds in combination may be appropriate.  For others, the ability to isolate sounds is integral to the purpose of measuring sound levels.
     Of particular importance to a hearing loss prevention program (HLPP) is the ability to add, subtract, and average contributions to the sound pressure level (LP, SPL) in a workplace.  The ratios and logarithms used to calculate SPLs, presented in Part 3, complicate the arithmetic, but only moderately.  This installment of the “Occupational Soundscapes” series introduces the mathematics of sound, enabling readers to evaluate multiple sound sources present in workers’ environs.
     As mentioned in Part 3, sound pressure is influenced by the environment.  The number of sources, the sound power generated by each, and one’s location relative to each source contribute to the sound pressure level to which a person is subjected.

SPL Addition
     A representative example of a typical application will be used to place SPL addition in context.  This should make it easier to understand the process and its value to hearing conservation efforts.  Consider a manufacturing operation with several machines running in one department.  The company’s industrial hygienist has tasked the department manager with reducing the noise to which operators are exposed.  With a capital budget insufficient to replace machines or make significant modifications to the facility, the manager concludes that the only feasible option is to schedule work within the department such that, at all times, some machines remain idle (i.e. quiet).  To determine a machine schedule that will yield acceptable noise exposures while meeting production demands, SPLs generated by each machine are added in various combinations.
     A baseline for comparison must be established to evaluate sound-level reduction results.  The baseline SPL includes sounds from all sources and can be established by either the formula method or the table method.
     Using the formula method, the total (i.e. combined) SPL generated by n sources is calculated with the following equation:
LPt = 10 log [10^(LP1/10) + 10^(LP2/10) + … + 10^(LPn/10)]
where LPt is the total SPL and LPi is the SPL generated by the ith source.  The calculation of total SPL for our example, which includes five machines, is tabulated in Exhibit 1, where it is found to be 96.8 dB.
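     The formula method is straightforward to implement.  A minimal Python sketch, using the five machine levels listed later in the averaging example (88.0, 92.5, 91.0, 89.5, and 83.5 dB), reproduces the 96.8 dB baseline of Exhibit 1; the function name is illustrative.

```python
import math

def spl_add(levels):
    """Combine sound pressure levels (dB) by summing their underlying energies."""
    return 10 * math.log10(sum(10 ** (lp / 10) for lp in levels))

machine_spls = [88.0, 92.5, 91.0, 89.5, 83.5]  # dB, the five machine levels
print(f"Baseline (all machines): {spl_add(machine_spls):.1f} dB")  # 96.8 dB
```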
     To use the table method, first sort the source SPLs in descending order.  Compare the two highest SPLs and determine the difference.  Find this value in the left column of the table in Exhibit 2 and the corresponding value in the right column.  Interpolation may be necessary, as only integer differences are tabulated.  Add the value from the right column to the higher SPL to obtain the combined SPL to be used in subsequent iterations.
     Compare the combined SPL to the next source in the sorted list, repeating the process described until all source SPLs have been added or they no longer contribute to the total SPL.  When adding a large number of sources, sorting SPLs first may allow the process to be abbreviated; once the difference between the combined SPL and the next source exceeds 10 dB, the remainder of the list need not be considered.
     A pictorial representation of the cascading calculations performed in the table method of SPL addition is provided in Exhibit 3, where the total SPL for our example is found to be 96.5 dB.  This result differs slightly from that attained by the more-precise formula method, but this need not be a concern.  The reduced complexity of computation often justifies the sacrifice of accuracy.  A 0.3 dB difference, like that found in our example, is imperceptible to humans and is, therefore, inconsequential.  While some circumstances warrant use of the formula method, the table method of SPL addition provides a convenient estimate without the need for a calculator.
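     The cascade itself is easy to automate.  The sketch below computes each increment exactly, as 10 log (1 + 10^(-d/10)), rather than looking it up; Exhibit 2 presumably tabulates rounded values of this same quantity, which is why the table method lands at 96.5 dB instead of the ~96.6 dB produced here.  The 10 dB cutoff implements the shortcut described above; function names are illustrative.

```python
import math

def pairwise_increment(difference_db: float) -> float:
    """Amount (dB) added to the higher of two levels, given their difference."""
    return 10 * math.log10(1 + 10 ** (-difference_db / 10))

def spl_add_cascade(levels):
    """Table-method cascade: combine levels two at a time, highest first."""
    remaining = sorted(levels, reverse=True)
    combined = remaining[0]
    for lp in remaining[1:]:
        diff = combined - lp
        if diff > 10:          # remaining, quieter sources are neglected (shortcut above)
            break
        combined = combined + pairwise_increment(diff)
    return combined

print(f"{spl_add_cascade([88.0, 92.5, 91.0, 89.5, 83.5]):.1f} dB")  # ~96.6 dB (vs. 96.5 dB by table lookup)
```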
     The department manager proposes running the machines in two groups – machines 1, 3, and 5 will run simultaneously, alternating with machines 2 and 4.  Total SPL calculations for each machine grouping, using the formula method and the table method, are shown in Exhibit 4 and Exhibit 5, respectively.
     Total SPL results are the same for both methods – 93.3 dB for machine group (1, 3, 5) and 94.3 dB for machine group (2, 4).  This represents 3.5 dB and 2.5 dB reductions, respectively, from the baseline SPL (all machines running).  These are consequential reductions in noise exposure; the proposal is accepted and the new machine operation schedule is implemented.  The total SPL remains high, however, and further improvements should be sought.
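     The proposed groupings can be checked with the same energy summation; assigning machine numbers to the levels in the order listed in the averaging example is an assumption.

```python
import math

def spl_add(levels):
    """Combine sound pressure levels (dB) by summing their underlying energies."""
    return 10 * math.log10(sum(10 ** (lp / 10) for lp in levels))

spl = {1: 88.0, 2: 92.5, 3: 91.0, 4: 89.5, 5: 83.5}  # dB per machine (assumed numbering)
print(f"Group (1, 3, 5): {spl_add([spl[1], spl[3], spl[5]]):.1f} dB")  # 93.3 dB
print(f"Group (2, 4):    {spl_add([spl[2], spl[4]]):.1f} dB")          # 94.3 dB
```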

SPL Subtraction
     To determine the SPL contribution of a single source, the “background” SPL is subtracted from the total.  As with SPL addition, there are two methods.
     Consider the 5-machine department presented in the SPL addition example; for this example, the SPL attributed to machine 3 is unknown.  With machine 3 turned off, the total SPL in the department is 95.5 dB; this is considered the background level with respect to machine 3.  Recall that the total SPL with all machines running is 96.8 dB.
     To subtract the background SPL by the formula method, use the following equation:
LPi = 10 log [10^(LPt/10) – 10^(LPb/10)]
where LPi is the SPL of the source of interest, LPt is the total SPL (all machines running), and LPb is the background SPL (source of concern turned off).  In our example, LP3 = 10 log (10^9.68 – 10^9.55) = 90.9 dB.  The result can be verified using SPL addition:  LPt = 10 log (10^9.55 + 10^9.09) = 96.8 dB.
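     A short sketch of the background-subtraction calculation, using the values from the example:

```python
import math

def spl_subtract(total_db: float, background_db: float) -> float:
    """SPL of a single source, given the total and the background (source turned off)."""
    return 10 * math.log10(10 ** (total_db / 10) - 10 ** (background_db / 10))

machine_3 = spl_subtract(96.8, 95.5)
print(f"Machine 3 alone: {machine_3:.1f} dB")  # 90.9 dB

# Verification by re-adding the background:
total = 10 * math.log10(10 ** (95.5 / 10) + 10 ** (machine_3 / 10))
print(f"Recombined total: {total:.1f} dB")     # 96.8 dB
```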
     To determine the SPL attributed to machine 3 by the table method, find the difference between the total and background SPLs (96.8 dB – 95.5 dB = 1.3 dB) in the left column of the table in Exhibit 6.  Subtracting the corresponding value in the right column of the table (~ 6.0 dB) from the total SPL gives the machine 3 SPL (96.8 dB – 6.0 dB = 90.8 dB).  Again, interpolation causes a small variance in the results that remains inconsequential.

SPL Averaging

     Measurements may be repeated across time or varying conditions.  In our 5-machine department example, this may be to document different machine combinations or sounds generated during specific operations.  In the latter scenario, an average SPL may be a useful, though simplified, characterization of the environment.
     SPLs are averaged using the following formula:
LPavg = 10 log [(1/n) (10^(LP1/10) + 10^(LP2/10) + … + 10^(LPn/10))]
where n is the number of measurements to be averaged and LPi is an individual measurement.
     As an example, the SPLs of the 5-machine department example will be reinterpreted as multiple measurements in a single location.  Averaging the five SPL values (88.0, 92.5, 91.0, 89.5, and 83.5 dB) using the equation above gives LPavg = 89.8 dB; the arithmetic average of the same values is 88.9 dB.  When the range of SPLs to be averaged is small (e.g. < 5 dB), the arithmetic average can be used to approximate the decibel average.  The arithmetic and decibel average calculations for this example are shown in Exhibit 7.  Arithmetic averaging provides a convenient estimation method, but the decibel average should be calculated for any “official” purpose, as the two rapidly diverge.
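     A sketch comparing the decibel (energy) average with the arithmetic approximation for the measurements above:

```python
import math

def spl_average(levels):
    """Decibel (energy) average of sound pressure levels."""
    return 10 * math.log10(sum(10 ** (lp / 10) for lp in levels) / len(levels))

measurements = [88.0, 92.5, 91.0, 89.5, 83.5]  # dB
print(f"Decibel average:    {spl_average(measurements):.1f} dB")               # 89.8 dB
print(f"Arithmetic average: {sum(measurements) / len(measurements):.1f} dB")   # 88.9 dB
```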

     In the coming installments of the “Occupational Soundscapes” series, the connections between previous topics and hearing conservation begin to strengthen.  The discussion of audiometry brings together the physiological functioning of the ear (Part 2), speech intelligibility (introduced in Part 2), and the decibel scale (Part 3) to lay the foundation of a hearing loss prevention program.

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] An Introduction to Acoustics.  Robert H. Randall.  Addison-Wesley; 1951.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Noise Navigator Sound Level Database, v1.8.” Elliot H. Berger, Rick Neitzel, and Cynthia A. Kladden.  3M Personal Safety Division; June 26, 2015.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 3:  The Decibel Scale]]>Wed, 23 Aug 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-3-the-decibel-scale     In all likelihood, readers of this series have encountered the decibel scale many times.  It may have been used in the specifications of new machinery or personal electronic devices.  Some may be able to intuit the practical application of these values, but it is likely that many lack knowledge of the true meaning and implications of the decibel scale.
     This installment of the “Occupational Soundscapes” series introduces the decibel (dB) and its relevance to occupational noise assessment and hearing conservation.  Those with no exposure to the scale, as well as those who have a functional understanding but lack foundational knowledge, benefit from understanding its mathematical basis.  The characteristics of sound to which it is most-often applied are also presented to continue developing the knowledge required to effectively support a hearing loss prevention program (HLPP).
The Decibel
     Two key characteristics of the decibel scale define its use and contribute greatly to its lack of common understanding.  First, it is a logarithmic scale.  Linear scales are more common, which may lead those unfamiliar with the decibel scale to assume it, too, is linear.
     Second, the decibel scale is a comparative measure, incorporating the ratio of the measured quantity to a reference value.  Absolute scales are more common, potentially leading to another erroneous assumption.
     Making either assumption leads to gross misinterpretation of the information provided by cited values.  Mathematically, the general expression of the decibel scale is:
level (dB) = 10 log (measured value / reference value)
(all logarithms cited are base 10, log10, unless otherwise specified).  The multiplication factor of 10 converts Bels to decibels.  One Bel is defined as the increase corresponding to a tenfold increase in the ratio of values.  A decibel (dB) is, therefore, one tenth of a Bel.  The nature of the scale yields a dimensionless value that is valid for any system of units.
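     A few illustrative ratios make the logarithmic character of the scale concrete:

```python
import math

def decibels(measured: float, reference: float) -> float:
    """General decibel expression: 10 * log10(measured / reference)."""
    return 10 * math.log10(measured / reference)

print(decibels(10, 1))           # 10.0 dB -- a tenfold ratio is one Bel (10 dB)
print(decibels(100, 1))          # 20.0 dB -- each additional factor of 10 adds 10 dB
print(round(decibels(2, 1), 1))  #  3.0 dB -- doubling the ratio adds ~3 dB
```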

Sound Parameters
     To use the decibel scale effectively, in the context of occupational soundscapes, the interrelationships of power, intensity, and pressure must be understood.  Differentiating these measures is critical to understanding the true nature of the sound environment under scrutiny.
     Sound power (W), measured in watts (W), is the amount of acoustical energy produced by a sound source per unit time.  It is a characteristic of the source and is, therefore, independent of its location or surroundings.  In this discussion, it is assumed that sounds are generated by point sources, with sound dispersing spherically; variations will be introduced later.
     Sound intensity (I), measured in watts per square meter (W/m^2), is the sound power per unit area.  It is dependent on location, as it accounts for the dispersion of sound energy at a specified radial distance from the source:
I = W / (4 π r^2)
The equation reveals that intensity decreases with the square of the distance from the source.  This inverse square law is depicted in Exhibit 1.
     Sound intensity, I, is a vector quantity.  In free-field conditions, however, the lack of obstructions and reflecting surfaces renders the specification of direction moot.  The intensity at a given distance from the source is equal in all directions.
     Sound pressure (P), measured in newtons per square meter (N/m^2) or, equivalently, Pascal (Pa), is the variable air pressure (force per unit area) superimposed on atmospheric pressure.  Propagation of pressure fluctuations as sound waves was introduced in Part 2; root mean square pressure (PRMS) is typically used.  Sound pressure is an effect of sound power generated by a source; it is influenced by the surrounding environment and distance from the source.
     Of the three parameters described, only pressure can be measured directly.  With adequate pressure data, however, it is possible to work backwards to obtain intensity and power values.  To do this, first calculate the RMS pressure of the sound wave.
     Sound intensity is calculated using the following formula:
I = P^2 / (ρ c)
where P is the RMS pressure (Pa), ρ is the density of air, and c is the speed of sound in air.
     At standard conditions, ρ = 1.2 kg/m^3; though the density of air varies, this approximation provides sufficient accuracy for most purposes.  Likewise, the approximation of c = 343 m/s will typically suffice.
      With the intensity at a known distance from the source, calculating sound power is simple:
W = I × A
where A is the spherical area at distance r (A = 4 π r^2).
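     A sketch of the back-calculation described above, assuming free-field spherical spreading; the measured pressure and distance are arbitrary example values.

```python
import math

RHO = 1.2    # kg/m^3, approximate density of air
C   = 343.0  # m/s, approximate speed of sound in air

def intensity_from_pressure(p_rms: float) -> float:
    """Sound intensity (W/m^2) from RMS pressure (Pa): I = P^2 / (rho * c)."""
    return p_rms ** 2 / (RHO * C)

def power_from_intensity(intensity: float, r: float) -> float:
    """Sound power (W) of a point source: W = I * A, with A = 4 * pi * r^2."""
    return intensity * 4 * math.pi * r ** 2

p_rms = 0.2  # Pa, arbitrary example measured at r = 3 m
r = 3.0
I = intensity_from_pressure(p_rms)
W = power_from_intensity(I, r)
print(f"I = {I:.2e} W/m^2, W = {W:.2e} W")
```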
 
Sound Levels
     In the previous section, sound power, intensity, and pressure were discussed in absolute terms.  More often, however, these measures are referenced by their levels, using the decibel scale.  Doing so makes the very wide range of values encountered more manageable.
     The sound power level (LW or PWL) is calculated using the general expression of the decibel scale, rewritten as:
LW = 10 log (W / Wref)
where Wref is the reference power value; Wref = 10^-12 W.
     Likewise, the sound intensity level (LI or SIL) is calculated with the general expression rewritten as:
LI = 10 log (I / Iref)
where Iref is the reference intensity value; Iref = 10^-12 W/m^2.
     Using the expression for LI and the inverse square law, it can be shown that 6 dB of attenuation is attained by doubling the distance from the source.  Choosing an arbitrary value, (I/Iref) = 40, at distance r, we get LI(r) = 10 log (40) = 16 dB.  Doubling the distance increases the area of the hypothetical sphere by a factor of 4 (see Exhibit 1).  With power constant, this increased area reduces intensity by a factor of 4, which, in turn, reduces (I/Iref) by the same factor.  Therefore, for our example, (I/Iref) = 10 at distance 2r and we get LI(2r) = 10 log (10) = 10 dB, a reduction of 6 dB.
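     The same result can be checked numerically; a brief verification of the 6 dB-per-doubling rule using the arbitrary ratio from the text:

```python
import math

I_REF = 1e-12  # W/m^2, reference intensity

def intensity_level(intensity: float) -> float:
    """Sound intensity level (dB): 10 * log10(I / Iref)."""
    return 10 * math.log10(intensity / I_REF)

# Doubling the distance spreads the same power over 4x the area,
# so intensity drops by a factor of 4.
I_at_r = 40 * I_REF  # the arbitrary ratio used in the text
print(f"{intensity_level(I_at_r):.1f} dB")      # 16.0 dB at distance r
print(f"{intensity_level(I_at_r / 4):.1f} dB")  # 10.0 dB at distance 2r -> 6 dB attenuation
```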
     Equating the two expressions for I, above, and rearranging, we get
P^2 = W ρ c / (4 π r^2)
In this form, it is easy to see that it is the square of pressure that is proportional to power and intensity; like intensity, P^2 varies inversely with r^2.  Thus the general expression is rewritten for the sound pressure level (LP or SPL) as:
LP = 10 log (P^2 / Pref^2) = 20 log (P / Pref)
Pref is the reference pressure value; Pref = 2 x 10^-5 N/m^2 = 20 μPa, corresponding to the threshold of human hearing at 1000 Hz.  Exhibit 2 provides examples of decibel scale levels and corresponding absolute values of sound power, intensity, and pressure.  The following should be noted in the table (a pressure-to-level conversion sketch follows the list):
  • Each of the reference values corresponds to 0 dB – when the ratio = 1, log (1) = 0.
  • Power and intensity are numerically equal at equal dB levels – numerically equal reference values are used.
  • All decimal places are shown, but small values of power and intensity are typically expressed in scientific notation (e.g. 10^-12) and small values of pressure are typically expressed in μPa.
  • Values in the table range from the threshold of human hearing to far beyond the threshold of pain (the range of human hearing will be discussed further in a future installment).
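     The pressure-to-level conversion sketch referenced above; it regenerates entries like those in the pressure column of Exhibit 2 from a level, and vice versa:

```python
import math

P_REF = 20e-6  # Pa (20 micropascals)

def spl_from_pressure(p_rms: float) -> float:
    """Sound pressure level (dB) from RMS pressure (Pa)."""
    return 20 * math.log10(p_rms / P_REF)

def pressure_from_spl(spl_db: float) -> float:
    """RMS pressure (Pa) from sound pressure level (dB)."""
    return P_REF * 10 ** (spl_db / 20)

print(f"{spl_from_pressure(1.0):.1f} dB SPL")  # 94.0 dB SPL corresponds to 1 Pa
print(f"{pressure_from_spl(85):.3f} Pa")       # ~0.356 Pa at 85 dB SPL
```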
     Sound power and intensity levels are useful for acoustics projects – designing sound systems, venues, etc. – but sound pressure levels are most useful in quantifying occupational environments and supporting hearing conservation programs.  Examples of typical sound pressure levels encountered in commercial, recreational, and other settings are shown in Exhibit 3.  The “Noise Navigator,” an extensive database compiled and published by 3M Corporation, is available online.  In it, measurements of numerous sound levels are recorded, providing more useful data for research and planning purposes.

     Thus far, sounds have been treated as if generated by a singular point source in free-field conditions (no interference in spherical transmission).  Realistic soundscapes, however, are comprised of multiple complex sounds from various sources in environments where obstructions and reflective surfaces are ubiquitous.  In the next installment, the “Occupational Soundscapes” series begins to tackle the challenges of real-world conditions, presenting methods for assessing the effects of multiple simultaneous sounds on sound pressure levels.

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] An Introduction to Acoustics.  Robert H. Randall.  Addison-Wesley; 1951.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Noise Navigator Sound Level Database, v1.8.” Elliot H. Berger, Rick Neitzel, and Cynthia A. Kladden.  3M Personal Safety Division; June 26, 2015.
[Link] “Sound Intensity.”  Brüel & Kjaer; September 1993.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 2:  Mechanics of Sound and the Human Ear]]>Wed, 09 Aug 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-2-mechanics-of-sound-and-the-human-ear     A rudimentary understanding of the physics of sound and the basic functions of the human ear is necessary to appreciate the significance of test results, exposure limits, and other elements of a hearing loss prevention program (HLPP).  Without this background, data gathered in support of hearing conservation have little meaning and effective protections cannot be developed and implemented.
     This installment of the “Occupational Soundscapes” series provides readers an introduction to the generation and propagation of sound and the structure and function of the human ear; it is not an exhaustive treatise on either subject.  Rather, it aims to provide a foundation of knowledge – a refresher, for many – on which future installments of the series build, without burdening readers with extraneous or potentially confusing detail.
Sound Generation and Propagation
     Sound can travel through solid, liquid, and gaseous media.  As our primary interest is in human hearing, this presentation focuses on sound propagation through air.  It should be noted, however, that vibrations in other media can be transferred to surrounding air and, therefore, ultimately perceptible by the human ear.  In fact, structure-borne noise is a prominent component of many occupational soundscapes.
     In air, sound is propagated via longitudinal pressure waves.  The movement of particles in a longitudinal wave is parallel to the wave’s direction of travel.  In contrast, particles move perpendicular to the travel direction of a transverse wave; a common example is a ripple in water.  A pressure wave consists of alternating regions of high and low pressure.  These are known as compressions and rarefactions, respectively, as the air pressure oscillates above and below ambient atmospheric pressure as portrayed in Exhibit 1.
     Sound waves are most-often referenced by their amplitude (A) and frequency (f).  A sound wave’s amplitude is the maximum pressure deviation from the ambient (μPa).  It is related to the perception of “loudness” of the sound.
     Instantaneous sound pressure is often a less useful measure than one that is time-based, such as an average.  However, the time average of a sinusoidal pressure waveform is zero and, thus, unhelpful.  For a metric comparable across time and events, the root mean square (RMS) sound pressure (PRMS) is used.  To calculate PRMS:
(1) Consider the sound pressure waveform over a specified time period; the period of time considered is important for comparison of sound environments or events.
(2) Square (multiply by itself) the waveform (i.e. pressures) [P^2].
(3) Average the pressure-squared waveform (mean pressure squared) [P^2avg].
(4) Take the square root of the mean pressure squared (RMS pressure) [PRMS].
The steps of this process are represented graphically in Exhibit 2 and mathematically by the equation:
PRMS = √(P^2avg)
Squaring the pressures ensures that RMS values are always positive, simplifying use and comparison.
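     A numerical sketch of the four steps, applied to a pure tone sampled over a whole number of cycles; for a sinusoid, the result should match amplitude/√2 (the tone parameters are arbitrary).

```python
import math

amplitude = 2.0    # Pa, pressure amplitude of a pure tone
frequency = 250.0  # Hz
sample_rate = 48000
duration = 0.1     # s, which spans 25 complete cycles at 250 Hz

n = int(sample_rate * duration)
pressures = [amplitude * math.sin(2 * math.pi * frequency * i / sample_rate) for i in range(n)]  # step 1

mean_square = sum(p ** 2 for p in pressures) / n   # steps 2 and 3
p_rms = math.sqrt(mean_square)                     # step 4

print(f"P_RMS = {p_rms:.4f} Pa")                       # ~1.4142 Pa
print(f"A / sqrt(2) = {amplitude / math.sqrt(2):.4f} Pa")
```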
     A wave’s frequency is the number of complete wave cycles to pass a fixed point each second (Hz or cycles per second).  Frequency is related to the perception of a sound’s pitch.  A wave’s period (T), the time required for one complete wave cycle to pass a fixed point, is simply the inverse of its frequency:  T = 1/f (s).
     The wavelength (λ), the length of one complete wave cycle, is related to frequency and the speed of sound (c):  λ = c/f (m).  Though the speed of sound in air is influenced by temperature and density (i.e. elevation), an approximation of c = 343 m/s (1125 ft/s) is often used in lieu of calculating a more precise value.
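     A quick worked example of the period and wavelength relations, using the approximate speed of sound given above:

```python
C = 343.0  # m/s, approximate speed of sound in air

for f in (250, 1000, 4000):  # Hz
    period_ms = 1000.0 / f    # T = 1/f, expressed in milliseconds
    wavelength_m = C / f      # lambda = c/f
    print(f"{f} Hz: T = {period_ms:.2f} ms, wavelength = {wavelength_m:.3f} m")
```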
 
     Speech is generated by forcing air through the larynx.  Movement of the vocal cords creates pressure fluctuations that manifest as complex sound waves.  These complex waves include carriers and modulation superimposed upon them.  Timing of modulation differentiates similar sounds, such as the letters “b” and “p;” therefore, resolution of these timing differences in the auditory system is integral to speech intelligibility.

     Maintaining speech communication abilities is paramount to a hearing loss prevention program (HLPP).  As such, understanding typical characteristics of voiced sounds is critical to its success.  Exhibit 3 shows the average (dark line) and range (shaded region) of sound pressures created by a small, homogeneous sample of subjects (seven adult males) reciting the same nonsensical sentence.  Though unrepresentative of the diversity of the broader population, the results are indicative of the variability that can be expected in a wider study.  A more-generalized data set is depicted in Exhibit 4, suggesting that the most-critical speech frequencies lie in the range of 170 – 4000 Hz (dB scales and the relation to speech communication will be explored in a future installment of the series).
     In addition to the voiced sounds of “normal” speech, humans generate unvoiced, or breath, sounds.  These occur when air is passed through the “vocal equipment” (i.e. larynx, mouth) without activating the vocal cords.  Breath sounds, acting as low-energy carriers, make whispering possible.
 
     All sounds in an environment – wanted and unwanted – impinge on occupants indiscriminately.  It is up to the auditory system of each occupant to receive, process, resolve, differentiate, locate, and interpret these sounds collectively and/or individually as circumstances dictate.  Much of this work is performed by a sophisticated organ that often garners little attention:  the seemingly underappreciated ear.
 
Structure and Function of the Human Ear
     Exhibit 5 provides a pictorial representation of the ear’s complexity; a detailed discussion of each component is impractical in this introductory presentation.  Instead, a brief description of some critical components and their roles in the perception of sound is offered.  It won’t make “experts” of readers, but will provide the basic understanding needed to support hearing conservation efforts.
     Hearing – the perception of sound – takes place in three “stages” corresponding to the three regions of the ear:  outer, middle, and inner.  Common use of the word “ear” often refers only to the outer ear, the region highlighted in Exhibit 6.  Many times, it is intended to reference only the visible, cartilaginous portion called the pinna or auricle.  The pinna is most famous for adornment with jewelry and being the part of a misbehaving child pulled by a TV sitcom mom.
     Sound waves in the environment impinge upon the outer ear, where the pinna helps direct them into the external auditory canal, or simply ear canal.  The structure of the ear canal causes it to resonate, typically, in the range of 3 – 4 kHz, providing an amplification effect for sounds at or near its resonant frequency.
     The terminus of the outer ear is the tympanic membrane, commonly known as the eardrum.  The variable pressure of the impinging sound waves causes this diaphragm-like structure to flex inward and outward in response.
 
     In the middle ear, highlighted in Exhibit 7, the vibrational energy of the flexing eardrum is transmitted to another membrane in the inner ear via a linkage of three small bones, collectively called the ossicles or ossicular chain.  The malleus is attached to the eardrum and the stapes is attached to a membrane in the oval window of the cochlea.  Between these two lies the incus, the largest of the three bones.  The malleus, incus, and stapes are commonly known as the hammer, anvil, and stirrup, respectively.
     The configuration of the ossicular chain provides approximately a 3:1 mechanical advantage.  In conjunction with the relative sizes of the eardrum and oval window, the middle ear provides an overall mechanical advantage of approximately 15:1.  The ability to hear very soft sounds is attributed to the amplification effect produced in the middle ear.
     The Eustachian tube connects the middle ear to the nasal cavity, enabling pressure equalization with the surrounding atmosphere.  Blockage of the Eustachian tube, due to infection, for example, results in pressure deviations that can reduce hearing sensitivity, potentially to the extent of deafness.
     Two small muscles, the stapedius and the tensor tympani, serve a protective function against very loud sounds.  These muscles act on the bones of the ossicular chain, changing its transmission characteristics to reduce the energy transmitted to the inner ear.  This protection mechanism is only available for sustained sounds, as the reflexive contraction of these muscles, known as the acoustic reflex, does not engage rapidly enough to attenuate sudden bursts of sound, such as explosions or gunshots.
 
     The inner ear, highlighted in Exhibit 8, is comprised primarily of the cochlea.  The cochlea is an extraordinary organ in its own right; its presentation here is, necessarily, an extreme simplification.  Many of its components and functional details will not be discussed, as a descriptive overview is of greater practical value with respect to hearing conservation.
     The motion of the stapes (stirrup) in the oval window induces pressure fluctuations in the fluid in the chambers of the cochlea.  The mechanical advantage provided by the middle ear serves to overcome the impedance mismatch between the air in the outer and middle ear and the liquid in the inner ear.  As mentioned previously, this maintains sensitivity to low-intensity sounds.
     The fluid movement, in turn, causes tiny hair cells in the cochlea to bend in relation to the sound energy transmitted.  These hairs are selectively sensitive to frequency; the extent of bending is proportional to the loudness of the sound.  It is this bending of hair cells that is translated into electrical signals that are sent to the brain for interpretation.  Damaging these sensitive hairs leads to reduced hearing sensitivity.  They are also nonregenerative; therefore, hearing loss caused by damaging these hairs is permanent and irrecoverable.  Though these hairs can be damaged by mechanisms other than noise exposure, NIHL remains a prominent and important concern.
     The basilar membrane, separating the chambers of the cochlea, is also selectively sensitive to frequency, due to its varying mass and stiffness.  This results in the tonotopic organization of the cochlea, as depicted in Exhibit 9.  The highest frequencies (~ 20 kHz) are detected near the basal end of the membrane (i.e. nearest the oval window); sensitivity shifts to progressively lower frequencies along the cochlear spiral.  Sensitivity to frequencies above ~ 2 kHz, including critical speech frequencies, is concentrated in the first 3/4 “coil” of the cochlea.  The range of human hearing and the critical speech frequencies will be discussed further in a future installment.
     The semicircular canals are highly recognizable, projecting from the cochlea’s distinct snail-like shape, but they play no significant role in hearing.  They are, however, integral to the critical function of maintaining balance which enables humans to walk upright.  Because the semicircular canals share a fluid supply with the cochlea, issues with hearing and balance can be correlated during a traumatic event.
     The introduction to Part 1 of the series called attention to several parallels between occupational soundscapes and thermal work environments.  The fluids contained in the cochlea may demonstrate a more-direct link.  There are two key fluids contained in chambers of the cochlea.  One, perilymph, is sodium-rich and the other, endolymph, is potassium-rich.  As discussed in the “Thermal Work Environments” series (Part 3), depletion of these electrolytes (salts) can be caused by profuse sweating and/or ingesting large quantities of water without balanced replenishment.  In addition to causing heat cramps and other afflictions, it seems heat stress could affect your hearing!

     There is a great deal more detail available from various sources to explain the mechanics of hearing, particularly the inner workings of the cochlea.  It is a fascinatingly complex organ; intrigued readers are encouraged to consult the references at the end of this post, as well as medical texts, to learn more.  Despite the requisite simplification of this presentation, sufficient information has been included to enable readers to continue on this journey of sound exploration in pursuit of the ultimate goal: effective hearing conservation practices.

     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Occupational Soundscapes” volumes on “The Third Degree,” see Part 1: An Introduction to Noise-Induced Hearing Loss (26Jul2023).
 
References
[Link] “29 CFR 1910.95 - Occupational noise exposure.”  OSHA.
[Link] “Noise Control Design Guide.” Owens Corning; 2004.
[Link] Engineering Noise Control – Theory and Practice, 4ed.  David A. Bies and Colin H. Hansen.  Taylor & Francis; 2009.
[Link] The Noise Manual, 6ed.  D.K. Meinke, E.H. Berger, R.L. Neitzel, D.P. Driscoll, and K. Bright, eds.  The American Industrial Hygiene Association (AIHA); 2022.
[Link] “Hearing Protection.”  Laborers-AGC Education and Training Fund; July 2000.
[Link] Noise Control in Industry – A Practical Guide.  Nicholas P. Cheremisinoff.  Noyes Publications, 1996.
[Link] “Noise – Measurement And Its Effects.”  Student Manual, Occupational Hygiene Training Association; January 2009.
[Link] An Introduction to Acoustics.  Robert H. Randall.  Addison-Wesley; 1951.
[Link] “How Hearing Works.”  Hearing Industries Association; 2023.
[Link] “OSHA Technical Manual (OTM) - Section III: Chapter 5 - Noise.”  Occupational Safety and Health Administration; July 6, 2022.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Occupational Soundscapes – Part 1:  An Introduction to Noise-Induced Hearing Loss]]>Wed, 26 Jul 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/occupational-soundscapes-part-1-an-introduction-to-noise-induced-hearing-loss     Exposure to excessive noise in the workplace can have profound effects, both immediate and long-term.  Some consequences are obvious, while others may surprise those that have not studied the topic.
     Some industries, such as mining and construction, are subject to regulations published specifically for them.  This series presents information, including regulatory controls, that is broadly applicable to manufacturing and service industries.
     Several parallels exist between exposure to noise and heat stress (see the “Thermal Work Environments” series).  These include the relevance of durations of exposure and recovery, the manifestation of cognitive, as well as physical, effects on workers, and the importance of monitoring exposure and risk factors.
     To take advantage of these parallels, the “Occupational Soundscapes” series follows a path similar to that taken in the “Thermal Work Environments” series.  Terminology, physiological implications, measurement, and guidance for managing the risks are each discussed in turn.
 
Terms in Use
     The title “Occupational Soundscapes” was chosen to maintain the focus of the series on two important aspects.  First, “occupational” reminds readers that the subject matter context is the workplace.  Managing sound and preventing occupational noise-induced hearing loss (NIHL) – hearing loss caused by workplace noise – is the objective of the series.  This differentiates occupational hearing loss from other causes.  Other forms of hearing loss can occur in addition to NIHL; these include:
  • presbycusis – naturally occurring due to aging.
  • sociocusis – caused by recreational or non-occupational activities, such as music, aviation, motorsports, or arena sports.
  • nosocusis – caused by environmental factors such as exposure to chemicals, behaviors such as drug use, or underlying health conditions such as hypertension.
These types of hearing loss are presented to provide clarity to occupational causes, but will not be discussed in detail.
     The second term of the title, “soundscapes,” serves to remind readers that workplaces are filled with a combination of sounds; some are desired, others are detrimental to working conditions.  Each contribution to the soundscape has a unique source and set of parameters.
     Much of this series focuses on the reduction and control of, and protection from, noise – the unwanted portion of the soundscape – but readers should not lose sight of the wanted sound.  One very important reason to control noise is to maintain accessibility of desired sounds.  Speech communication is of particular importance and is the primary focus of audiometric testing and industrial noise-control regulation.
     In its “Criteria for a Recommended Standard – Occupational Noise Exposure, Revised Criteria” (1998), NIOSH declares that its focus is on prevention of hearing loss rather than conservation of hearing.  This emphatic declaration is somewhat bizarre, as this is a distinction without a difference.  The terms are functionally equivalent, particularly in practical matters, to which “The Third Degree” is committed.  Readers will be spared a detailed explanation of why this is true; suffice to say that references to hearing loss prevention, hearing conservation, and hearing preservation are considered interchangeable.
 
     While paralleling the information presentation of the “Thermal Work Environments” series, the objectives pursued in this series will also mimic those of its predecessor series.  In brief, each installment is limited in scale and scope to be palatable to busy practitioners, easily referenced, edited, or expanded as future development requires it.  To further promote a holistic approach to job design, the two series should be read as companion pieces.  Side-by-side review of thermal and aural requirements of a workplace may reveal complementary or synergistic solutions, increasing the efficiency of industrial hygiene improvement efforts.
 
     Links to the entire series are provided at the end of this post for easy reference.
 
     For additional guidance or assistance with Safety, Health, and Environmental (SHE) issues, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] “Criteria for a Recommended Standard - Occupational Noise Exposure, Revised Criteria 1998.”  Publication No. 98-126, NIOSH, June 1998.
[Link] “29 CFR 1910.95 - Occupational noise exposure.”  OSHA.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
 
Directory of “Occupational Soundscapes” entries on “The Third Degree.”
Part 1:  An Introduction to Noise-Induced Hearing Loss (26Jul2023)
Part 2:  Mechanics of Sound and the Human Ear (9Aug2023)
Part 3:  The Decibel Scale (23Aug2023)
Part 4:  Sound Math (6Sep2023)
]]>
<![CDATA[Thermal Work Environments – Part 5:  Managing Conditions in Hot Environments]]>Wed, 12 Jul 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-5-managing-conditions-in-hot-environments     Safeguarding the health and well-being of employees is among the critical functions of management.  In hot workplaces, monitoring environmental conditions and providing adequate protection comprise a significant share of these responsibilities.  The details of these efforts are often documented and formalized in a heat illness prevention program.
     An effective heat illness prevention program consists of several components, including the measure(s) used for environmental assessment, exposure limits or threshold values, policies defining the response to a limit or threshold being reached, content and schedule of required training for workers and managers, and the processes used to collect and review data and modify the program.  Other information may be added, particularly as the program matures.  Though it is nominally a prevention program, response procedures, such as the administration of first aid, should also be included; the program should not be assumed to be infallible.
     In this installment of the “Thermal Work Environments” series, the components of heat stress hygiene and various control mechanisms are introduced.  Combined with the types of information mentioned above, an outline of a heat illness prevention program emerges.  This outline can be referenced or customized to create a program meeting the needs of a specific organization or work site.
     The content of a heat illness prevention program is presented in five (5) sections:
  • Training
  • Hazard Assessment
  • Controls
  • Monitoring
  • Response Plans
A comprehensive review of each would be unwieldy in this format.  Instead, an overview of the information is provided as an introduction and guide to further inquiry when one begins to develop a program for his/her team.

Training
     Every person who works in, or has responsibility for, a hot workplace should be trained on the ramifications of excess heat.  Information relevant to the following four sections is included in an effective training program.  Examples of important topics for all team members include:
  • basics of human biometeorology and heat balance,
  • environmental, personal, and behavioral risk factors,
  • methods used to monitor conditions,
  • controls in place to prevent heat illness,
  • signs and symptoms of heat illness, and
  • first aid and emergency response procedures.
Training of supervisors and team leaders should emphasize proper use of controls, signs and symptoms, and appropriate responses to heat illness and failure of control mechanisms.
     A complete training plan includes the content of the training and a schedule for delivery.  It may be best to distribute a large amount of information among multiple modules rather than share it in a single, long presentation.  Refresher courses of reduced duration and intensity should also be planned to combat complacency and to update information as needed.  Refreshers are particularly helpful when dangerous conditions exist intermittently or are seasonal.
 
Hazard Assessment
     An initial hazard assessment consists of identifying the elements of job design (see Part 1) that are heat-related.  These include environmental factors, such as:
  • atmospheric conditions (e.g. temperature, humidity, sun exposure),
  • air movement (natural or forced), and
  • proximity to heat-generating processes or equipment.
Also included are job-specific attributes, such as:
  • intensity of work (i.e. strenuousness and rate),
  • personal protective equipment (PPE) and other gear required, and
  • availability of potable water and a cool recovery area.
Other relevant factors may also be identified.  A compound effect could be discovered, for example, between concentration required for task performance and an increase in heat stress due to resultant anxiety.
     Using the information collected in the hazard assessment, a risk profile can be created for each job.  The risk profile is then used to prioritize the development of controls and modifications to the job design.
 
Controls
     As with quality management [see “The War on Error – Vol. II:  Poka Yoke (What Is and Is Not)” (15Jul2020)], there is an implied hierarchy of controls used to manage heat-related effects on workers.  Engineering controls modify the tasks performed or the surrounding conditions, while administrative controls guide workers’ behavior to reduce heat stress.  Finally, personal protective equipment (PPE) is used to manage heat stress that could not otherwise be reduced.  PPE is often the first protection implemented and is used until more-effective controls are developed.
     A comprehensive heat stress control plan is developed by considering each term in the heat balance equation (see Part 2).  Examples of engineering controls include:
  • To reduce metabolic heat generation, M, provide lift assists, material transport carts, or other physical aids to limit workers’ exertion.
  • To reduce radiative heat load, R, install shields between heat sources (e.g. furnaces or other hot equipment) and workers, just as an umbrella is used to block direct sunlight.
  • Use fans to increase evaporative cooling, E, when air temperature is below 95° F (35° C).
  • Reduce air temperature with water-mist systems if relative humidity (RH) is below 50% and general air conditioning is not practical.
     Administrative control examples include:
  • Establish policies that limit work during periods of excessive heat; thresholds can be based on Heat Index (HI), Wet Bulb Globe Temperature (WBGT), or other index.  The American Conference of Governmental Industrial Hygienists (ACGIH) regularly publishes threshold limit values (TLVs) based on WBGT with adjustments for clothing and work/rest cycles.  ACGIH TLVs often serve as the basis for standards and guidelines developed by other organizations and government agencies.  A simple threshold-screening sketch follows this list.
  • Reduce M by increasing periods of rest in the work cycle.
  • Implement an acclimatization program for new and returning workers that allows them to develop “resistance” to heat stress.
  • Encourage proper hydration; ensure availability of cool potable water.
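     The WBGT-threshold policy described above can be reduced to a simple screening check.  The sketch below is illustrative only:  the function name, clothing adjustment, and action limit are placeholders chosen for demonstration, not ACGIH TLVs; an organization would substitute values from its chosen reference.

def wbgt_exceeds_action_limit(wbgt_c, clothing_adjustment_c, action_limit_c):
    # Effective WBGT = measured WBGT + clothing adjustment factor (all in deg C).
    # The action limit must come from the organization's chosen reference
    # (e.g. published TLV tables), not from this sketch.
    return (wbgt_c + clothing_adjustment_c) > action_limit_c

# Placeholder example: measured WBGT of 27.5 deg C, +1.0 deg C clothing adjustment,
# 28.0 deg C action limit -> True, so the work/rest policy would be triggered
print(wbgt_exceeds_action_limit(27.5, 1.0, 28.0))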
     Heat-related PPE examples include:
  • Reflective clothing to reduce radiative heat load, R.
  • A vest cooled with ice, forced air, or water increases conductive, K, and/or convective, C, heat loss.
  • Bandana, hat or similar item that can be wetted to enhance evaporative cooling, E, particularly from the head and neck.
  • Hydration backpack.
     Many controls are used in conjunction to achieve maximum effect.  Tradeoffs must be considered to ensure that the chosen combination of controls is the most-effective.  For example, cooling with a water-mist system increases humidity; if it begins to inhibit evaporation from skin, its use may be inadvisable.
 
Monitoring
     Monitoring is a multifaceted activity and responsibility.  In addition to measuring environmental variables, the effectiveness of controls and the well-being of workers must be continually assessed.  A monitoring plan includes descriptions of the methods used to accomplish each.
     Measurement of environmental variables is the subject of Part 4 of this series.  As discussed in that installment, multiple indices may be used to inform decisions regarding work cycle modifications or stoppages.  Those used in popular meteorology, such as Heat Index (HI), are often insufficient to properly characterize workplace conditions; however, they can be useful as early warnings that additional precautions may be needed to protect workers during particularly dangerous periods.  See “Heat Watch vs. Warning” for descriptions of alerts that the National Weather Service (NWS) issues when dangerous temperatures are forecast.
     After controls are implemented, they must be monitored for proper use and continued effectiveness.  This should be done on an ongoing basis, though a formal report may be issued only at specified intervals (e.g. quarterly) or during specific events (e.g. modification of a control).  Verification test procedures should be included in the monitoring plan to maintain consistency of tests and efficacy of controls.
     Monitoring the well-being of workers is a responsibility shared by a worker’s team and medical professionals.  Prior to working in a hot environment, each worker should be evaluated on his/her overall health and underlying risk factors for heat illness.  An established baseline facilitates monitoring a worker’s condition over time, including the effectiveness of acclimatization procedures and behavioral changes.
     Suggestions for behavioral changes, or “lifestyle choices,” can be made to reduce a worker’s risk; these include diet, exercise, consumption of alcohol or other substances, and other activities.  Recommendations to an employer regarding one’s fitness for certain duties, for example, must be made in such a way that protects both safety and privacy.  Heat-related issues may be best addressed as one component of a holistic wellness program such as those established by partnerships between employers, insurers, and healthcare providers.
 
Response Plans
     There are three (3) response plans that should be included in a heat illness prevention program.  Somewhat ironically, two of them are concerned with heat illness that was not prevented.
     The first response plan details the provisioning of first aid and subsequent medical care when needed.  Refer to Part 3 for an introduction to heat illnesses and first aid.
     The second outlines the investigation required when a serious heat illness or heat-related injury or accident occurs.  The questions it must answer include:
  • Were defined controls functioning and in proper use?
  • Had the individual(s) involved received medical screening and been cleared for work?
  • Had recommendations from prescreens been followed by individual(s) and the organization?
  • Had the individual(s) been properly acclimatized?
  • Were special circumstances involved (e.g. heat advisory, other emergency situation, etc.)?
The investigation is intended to reveal necessary modifications to the program to prevent future heat illness or heat-related injury.
     The final response plan needed defines the review process for the heat illness prevention program.  This includes the review frequency, events that trigger additional scrutiny and revision, and required approvals.
 
 
     Currently, management of hot work environments is governed by the “General Duty Clause” of the Occupational Safety and Health Act of 1970.  The General Duty Clause provides umbrella protections for hazards that are not explicitly detailed elsewhere in the regulations.  It is a generic statement of intent that provides no specific guidance for assessment of hazards or management of risks.
     In 2021, OSHA issued an “advance notice of proposed rulemaking” (ANPRM) to address this gap in workplace safety regulations.  A finalized standard, added to the Code of Federal Regulations (CFR), will add specific enforcement responsibilities to OSHA’s current role of education and “soft” guidance on heat-related issues.
     That an OSHA standard will reduce heat-related illness and injury is a reasonable expectation.  However, it must be recognized that it, too, is imperfect.  No standard or guideline can account for every person’s unique experience of his/her environment; therefore, an individual’s perceptions and expressions of his/her condition (i.e. comfort and well-being) should not be ignored.  A culture of autonomy, or “self-determination,” where workers are self-paced, or retain other responsibility for heat stress hygiene, is one of the most powerful tools for safety and health management imaginable.
 
 
     For additional guidance or assistance with complying with OSHA regulations, developing a heat illness prevention program, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “NIOSH Criteria for a Recommended Standard Occupational Exposure to Heat and Hot Environments.”  Brenda Jacklitsch, et al.  National Institute for Occupational Safety and Health (Publication 2016-106); February 2016.
[Link] “Threshold Limit Values for Chemical Substances and Physical Agents.”  American Conference of Governmental Industrial Hygienists (ACGIH); latest edition.
[Link] “National Emphasis Program – Outdoor and Indoor Heat-Related Hazards.”  Occupational Safety and Health Administration (OSHA); April 8, 2022.
[Link] “Ability to Discriminate Between Sustainable and Unsustainable Heat Stress Exposures—Part 1:  WBGT Exposure Limits.”  Ximena P. Garzon-Villalba, et al.  Annals of Work Exposures and Health;  June 8, 2017.
[Link] “Ability to Discriminate Between Sustainable and Unsustainable Heat Stress Exposures—Part 2:  Physiological Indicators.”  Ximena P. Garzon-Villalba, et al.  Annals of Work Exposures and Health;  June 8, 2017.
[Link] “The Thermal Work Limit Is a Simple Reliable Heat Index for the Protection of Workers in Thermally Stressful Environments.”  Veronica S. Miller and Graham P. Bates.  The Annals of Occupational Hygiene; August 2007.
[Link] “Thermal Work Limit.”  Wikipedia.
[Link] “The Limitations of WBGT Index for Application in Industries: A Systematic Review.”  Farideh Golbabaei, et al.  International Journal of Occupational Hygiene; December 2021.
[Link] “Evaluation of Occupational Exposure Limits for Heat Stress in Outdoor Workers — United States, 2011–2016.”  Aaron W. Tustin, MD, et al.  Morbidity and Mortality Weekly Report (MMWR).  Centers for Disease Control and Prevention; July 6, 2018.
[Link] “Occupational Heat Exposure. Part 2: The measurement of heat exposure (stress and strain) in the occupational environment.”  Darren Joubert and Graham Bates.  Occupational Health Southern Africa Journal; September/October 2007.
[Link] “Heat Stress:  Understanding factors and measures helps SH&E professionals take a proactive management approach.”  Stephanie Helgerman McKinnon and Regina L. Utley.  Professional Safety; April 2005.
[Link] “The Heat Death Line: Proposed Heat Index Alert Threshold for Preventing Heat-Related Fatalities in the Civilian Workforce.”  Zaw Maung and Aaron W. Tustin.  NEW SOLUTIONS: A Journal of Environmental and Occupational Health Policy; June 2020.
[Link] “Loss of Heat Acclimation and Time to Re-establish Acclimation.”  Candi D. Ashley, John Ferron, and Thomas E. Bernard.  Journal of Occupational and Environmental Hygiene; April 2015.
 

Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 4:  A Measure of Comfort in Hot Environments]]>Wed, 28 Jun 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-4-a-measure-of-comfort-in-hot-environments     Since the early 20th century, numerous methods, instruments, and models have been developed to assess hot environments in absolute and relative terms.  Many people are most familiar with the “feels like” temperature cited in local weather reports, though its method of determination can also vary.  Index calculations vary in complexity and the number of included variables.
     Despite the ever-improving accuracy and precision of instrumentation, heat indices remain models, or approximations, of the effects of hot environments on comfort and performance.  The models may also be applicable only in a narrow range of conditions.  When indices are routinely cited by confident “experts,” without qualifying information, those in the audience may attribute greater value to them than is warranted.
     Incorporating the range of possible environmental conditions and human variability requires an extremely complex model, rendering its use in highly-dynamic workplaces infeasible.  Though imperfect, there are models and methods that can be practically implemented for the protection of workers in hot environments.
     A comprehensive presentation of heat stress modeling is far beyond the scope of this series.  Instead, this installment presents two types of indices:
(1) indices that are familiar to most, such as those used in weather reports, and
(2) practical assessments of hot workplaces; i.e. indices derived from noninvasive measurement of environmental conditions.
 
Popular Meteorology
     Short-term weather forecasts are concerned, largely, with predicting levels of comfort.  Forecasters use temperature indices to convey what the combination of conditions “feels like” relative to a reference set of conditions (e.g. dry bulb temperature at 50% relative humidity).  Methods of calculation and individuals’ perceptions vary; thus, temperature indices are generally more reliable as temporal comparisons than geographical ones.
     The “apparent temperature” (AT) exemplifies the temporal utility and geographical futility of such indices.  AT has been defined in various ways, hindering meaningful aggregation of data.  This slip into generic use of the term also precludes a detailed discussion here; however, AT can still be a valuable reference in some applications.  If approximations are sufficient, a simple look-up table, such as that in Exhibit 1, can be used for rapid reference.
     As seen in the table, this formulation of apparent temperature accounts only for ambient temperature and relative humidity (RH).  Readers unfamiliar with meteorological instrumentation or terminology can interpret “dry bulb temperature,” Tdb, as “reading from standard thermometer.”
 
     Heat Index (HI), used by the National Weather Service (NWS), incorporates several variables in the calculation of apparent, or perceived, temperature.  Derived by multiple regression analysis (far beyond the scope of this series), calculation of HI has been simplified by choosing constant values for all variables except dry bulb temperature (Tdb) and relative humidity (RH).  To maintain brevity in this presentation, the selection of these values will not be discussed; practical application is not hampered by this omission.  Simplifications notwithstanding, calculation of Heat Index remains a multi-step process.
     First, the “simple” Heat Index equation is used:
     HI1 = 0.5 { Tdb + 61.0 + [(Tdb – 68.0) * 1.2] + (0.094 * RH)},
where Tdb is measured in degrees Fahrenheit (° F) and RH is given in percent (%).  If HI1 ≥ 80° F, the “full” Heat Index equation is used and required adjustments are applied.
     The full Heat Index equation incorporates the constant values selected for constituent variables.  Using temperatures measured on the Fahrenheit scale (subscript “F”),
     HIF = -42.379 + 2.04901523 * Tdb + 10.14333127 * RH – 0.22475541 * Tdb * RH – 0.00683783 * Tdb^2 – 0.05481717 * RH^2 + 0.00122874 * Tdb^2 * RH + 0.00085282 * Tdb * RH^2 – 0.00000199 * Tdb^2 * RH^2 .
Using temperatures measured on the Celsius scale (subscript “C”),
     HIC = -8.78469476 + 1.61139411 * Tdb + 2.33854884 * RH – 0.14611605 * Tdb * RH – 0.01230809 * Tdb^2 – 0.01642483 * RH^2 + 0.00221173 * Tdb^2 * RH + 0.00072546 * Tdb * RH^2 – 0.00000358 * Tdb^2 * RH^2 .
     Under certain conditions, an adjustment to the calculated HI is needed.  When RH < 13% and 80 < Tdb (° F) < 112 [26.7 < Tdb (° C) < 44.5], the following adjustment factor is added to the calculated HI:
     Adj1F = -{[(13 – RH)/4] * SQRT([17 - | Tdb – 95|]/17)} or
     Adj1C = -{[(13 – RH)/7.2] * SQRT([17 - |1.8 *  Tdb – 63|]/17)} .
When RH > 85% and 80 < Tdb (° F) < 87 [26.7 < Tdb (° C) < 30.5], the following adjustment factor is added to the calculated HI:
     Adj2F = [(RH – 85)/10] * [(87 – Tdb)/5] or
     Adj2C = 0.02 * (RH – 85) * (55 – 1.8 * Tdb)/1.8 .
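     The multi-step procedure above is straightforward to script.  The following is a minimal sketch of the Fahrenheit-scale calculation as presented here; the function name and example values are illustrative, and the NWS calculator remains the authoritative reference.

import math

def heat_index_f(t_db, rh):
    # Heat Index (deg F) from dry bulb temperature (deg F) and relative humidity (%)
    # Step 1: the "simple" equation
    hi = 0.5 * (t_db + 61.0 + (t_db - 68.0) * 1.2 + 0.094 * rh)
    if hi < 80.0:
        return hi
    # Step 2: the "full" regression equation
    hi = (-42.379 + 2.04901523 * t_db + 10.14333127 * rh
          - 0.22475541 * t_db * rh - 0.00683783 * t_db ** 2
          - 0.05481717 * rh ** 2 + 0.00122874 * t_db ** 2 * rh
          + 0.00085282 * t_db * rh ** 2 - 0.00000199 * t_db ** 2 * rh ** 2)
    # Step 3: adjustments for unusually low or high humidity
    if rh < 13 and 80 < t_db < 112:
        hi -= ((13 - rh) / 4) * math.sqrt((17 - abs(t_db - 95)) / 17)
    elif rh > 85 and 80 < t_db < 87:
        hi += ((rh - 85) / 10) * ((87 - t_db) / 5)
    return hi

print(round(heat_index_f(96, 60), 1))   # approximately 116.1 deg F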
     Limitations of the Heat Index equation extend beyond complexity of computation.  For HI < 80° F or 26° C, the full equation loses validity; the simple formulation is more useful.  Its derivation via multiple regression yields an error of ± 1.3° F (0.7° C), though this accuracy is usually sufficient for weather forecasts, as the geographical variation may exceed this amount.  Exposure to direct sunlight (insolation) can increase HI values up to 15° F (8° C), though the actual amount in given conditions is indeterminate in this model.  Constants chosen for constituent variables may also limit utility of HI in real conditions, should they vary significantly from assumptions.
     The preceding presentation of HI was intended to develop some appreciation for the potential complexity and limitations of temperature indices.  In reality, practical application requires none of this.  NWS provides a simple interface to input Tdb and RH values and quickly obtain HI values.  It can be found at www.wpc.ncep.noaa.gov/html/heatindex.shtml.  Links to other information are also provided for interested readers.
 
     The Canadian Meteorological Service uses humidex to express apparent temperatures.  This “humidity index,” like HI, incorporates temperature and humidity; it is calculated as follows:
     Humidex = Tdb + 0.5555 * {6.11 * e^(5417.7530 * [1/273.16 – 1/(Tdp + 273.15)]) - 10},
where Tdp is the dewpoint temperature (° C).  Alternatively,
     Humidex = Tdb + 0.5555 * (Pv – 10),
where Pv is the vapor pressure (hPa).  If vapor pressure data is available, calculation of humidex is obviously simpler; however, dewpoint temperatures are likely more readily attainable.
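     A minimal sketch of the dewpoint-based calculation follows; the function name and sample values are illustrative.

import math

def humidex(t_db_c, t_dp_c):
    # Humidex from dry bulb and dewpoint temperatures (both deg C)
    # Vapor pressure (hPa) derived from the dewpoint, per the formula above
    pv = 6.11 * math.exp(5417.7530 * (1 / 273.16 - 1 / (t_dp_c + 273.15)))
    return t_db_c + 0.5555 * (pv - 10.0)

print(round(humidex(30.0, 20.0), 1))   # approximately 37.6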
     Like their counterparts in the US, the Canadians save everyone the trouble of computing the index.  A humidex calculator is provided at weather.gc.ca/windchill/wind_chill_e.html.
 
Advanced Measures
     Presenting a detailed review of the numerous temperature indices and models would be contradictory to the objectives declared in Part 1.  The following discussion is limited to those with practical application to hot workplace environments.
     The most ubiquitous index is the Wet Bulb Globe Temperature (WBGT).  This index combines dry bulb (Tdb), wet bulb (Twb), and black globe (Tg) temperatures to compute an apparent temperature.  The component measurements represent ambient temperature, evaporative cooling, and radiant heat transfer, respectively.  The combination of measurements provides a better approximation of the effects of environmental conditions on the human body than is available from the meteorological indices discussed in the previous section.
     For outdoor environments with a solar load component,
     WBGTout = (0.7 * Twb) + (0.1 * Tdb) + (0.2 * Tg).
For environments with no solar load component (e.g. indoors), the calculation is reduced to
     WBGTin = (0.7 * Twb) + (0.3 * Tg).
Estimating WBGT, with adjustments for air movement and clothing, can be accomplished using the table and procedure described in Exhibit 2.
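     The two formulations translate directly into code.  A minimal sketch (function names and sample temperatures are illustrative):

def wbgt_outdoor(t_wb, t_db, t_g):
    # WBGT with a solar load: wet bulb, dry bulb, and black globe temperatures
    return 0.7 * t_wb + 0.1 * t_db + 0.2 * t_g

def wbgt_indoor(t_wb, t_g):
    # WBGT with no solar load (e.g. indoors)
    return 0.7 * t_wb + 0.3 * t_g

# Example: Twb = 25, Tdb = 32, Tg = 40 (deg C)
print(round(wbgt_outdoor(25, 32, 40), 1))   # 28.7
print(round(wbgt_indoor(25, 40), 1))        # 29.5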
     For best results, an instrument that complies with a broadly-accepted standard, such as ISO 7243, should be used to maintain consistency and comparability of data.  The standard defines characteristics of the globe, proper bulb wetting, and other information needed to use the instrument effectively.
     WBGT has been criticized for being “overly conservative.”  That is, restrictions placed on work rates and schedules based on WBGT limits have been deemed more protective than necessary to maintain worker health to the detriment of productivity.  Such criticisms of WBGT have led some to advocate for its use only as a screening tool.  Research, judgment, and multiple indices can be used to make this determination for specific circumstances and establish appropriate policies and procedures.
 
     Wet Globe Temperature (WGT) is comparable to WBGT.  It uses a copper globe covered with a wetted black cloth, called a Botsball, in place of the separate instruments of the WBGT apparatus.  A conversion has been derived to obtain WBGT from Botsball measurements:
     WBGT = 1.044 * WGT – 0.187° C.
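     Applying the conversion is a one-line exercise; a sketch (the function name is illustrative):

def wbgt_from_wgt(wgt_c):
    # Estimate WBGT (deg C) from a Botsball (WGT) reading (deg C)
    return 1.044 * wgt_c - 0.187

print(round(wbgt_from_wgt(28.0), 1))   # approximately 29.0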
 
     The Thermal Work Limit (TWL) has gained acceptance in some industries, such as mining, and could become common in others.  In particular, it is useful in outdoor environments with a significant contribution to heat stress attributable to radiant sources.  Advocates claim that TWL addresses the deficiencies of WBGT and is, therefore, a more-reliable indicator of heat stress.
     The TWL is the maximum metabolic rate (W/m^2) that can be sustained while maintaining a safe core temperature [< 100.4° F (38° C)] and sweat rate [< 1.2 kg/hr (42 oz/hr)].  It is determined using five environmental factors:  Tdb, Twb, Tg, wind speed (va), and atmospheric pressure (Pa).  TWL is based on assumptions that individuals are euhydrated, clothed, and acclimated to the conditions.
     A series of calculations is needed to determine TWL.  Rather than derail this discussion with a lengthy presentation of equations, readers are encouraged to familiarize themselves by using a calculator, such as that provided by Cornett’s Corner.  Preliminary calculations can be made with estimated metabolic rates; a guideline is provided in Exhibit 3.
     Body surface area (Ab) is determined by the following calculation:
     Ab (m^2) = 0.007184 * [weight (kg)]^0.425 * [height (cm)]^0.725 .
Finally, divide the metabolic rate by Ab and compare the result to TWL.  If TWL is exceeded, additional breaks in the work cycle are needed to maintain worker well-being.
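     A sketch of these final two steps is shown below.  The Du Bois constant matches the surface-area equation above; the metabolic rate and TWL value in the example are placeholders, as the TWL itself must come from the full series of calculations or an online calculator.

def body_surface_area(weight_kg, height_cm):
    # Du Bois estimate of body surface area, Ab, in m^2
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def exceeds_twl(metabolic_rate_w, weight_kg, height_cm, twl_w_per_m2):
    # Compare metabolic rate per unit surface area (W/m^2) against the TWL
    return metabolic_rate_w / body_surface_area(weight_kg, height_cm) > twl_w_per_m2

# Placeholder example: 250 W metabolic rate, 80 kg, 180 cm, TWL of 140 W/m^2
# Ab is about 2.0 m^2, so the ratio is about 125 W/m^2 and the TWL is not exceeded
print(exceeds_twl(250, 80, 180, 140))   # False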
 
     Other measures of heat stress and related risk are concerned with sweating and hydration.  Skin wettedness, predicted sweat loss, and required sweating determinations are more academic than practical.  Weight loss due to sweating of less than 1.5% of body weight indicates adequate hydration, but isolating the cause of weight variation in a workplace is not straightforward.
     The specific gravity of one’s urine is a more-reliable indication of a person’s hydration, but, again, collecting this data is not feasible in most workplaces.  The practical alternative is less scientific, less precise, but simple to implement on an individual basis.  The color of a person’s urine can warn of his/her worsening hypohydration and potential for heat illness (see Part 3).  A visual guideline for evaluation is provided in Exhibit 4.
Let’s Be Direct
     Three types of temperature or heat stress indices have been developed for various purposes.  Some are used to assess or predict comfort levels in noncritical environments, while others are used to protect workers from heat illness, extract maximum performance from an individual or battalion, or in pursuit of other consequential objectives.
     Rational indices are based on the heat balance equation (see Part 2).  These are the most accurate because they account for all mechanisms of heat transfer between the human body and its surroundings.  For the same reason, however, rational indices are also the most difficult to develop; the measurements required are infeasible outside a controlled research environment.  Their complexity and subsequent lack of practicality has excluded rational indices from this presentation.
     Two fatal flaws have excluded empirical indices from this presentation:  self-reported data and subjectivity of assessments.  Self-reported data is notoriously unreliable, as imperfect memory, motivated thinking, or other influences cause distortions in the record.  Subjective assessments have low repeatability that introduces large errors in results.  These flaws render empirical indices impractical for use in workplaces where consistent policies and procedures are required.
     Thus, we must rely on direct indices to assess workplace conditions.  Direct indices are derived from measurements of environmental parameters.  Such measurements are noninvasive; they do not interrupt workflows or require participation or attention from workers.
     Heat Index (HI), humidex, Wet Bulb Globe Temperature (WBGT), and Wet Globe Temperature (WGT) are direct indices of varying complexity.  The Thermal Work Limit (TWL) requires body dimensions, but these measurements need not be repeated frequently.  Metabolic work rates can be estimated, maintaining the noninvasive nature of the index.
 
     The indices presented in this installment are only a sample of a much wider array of heat stress models available.  Investigation of others may be necessary to develop confidence in the protection a chosen index affords workers.  Any effort to better understand the landscape of heat stress risks, evaluations, and countermeasures is a worthwhile investment in worker safety.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Human Factors in Technology.  Edward Bennett, James Degan, Joseph Spiegel (eds).  McGraw-Hill Book Company, Inc., 1963.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “NIOSH Criteria for a Recommended Standard Occupational Exposure to Heat and Hot Environments.”  Brenda Jacklitsch, et al.  National Institute for Occupational Safety and Health (Publication 2016-106); February 2016.
[Link] “Thermal Indices and Thermophysiological Modeling for Heat Stress.”  George Havenith and Dusan Fiala.  Comprehensive Physiology; January 2016.
[Link] “The Heat Index Equation.”  National Weather Service Weather Prediction Center; May 12, 2022.
[Link] “What is a Heat Stress Index?”  Ross Di Corleto.  The Thermal Environment; February 22, 2014.
[Link] “Three instruments for assessment of WBGT and a comparison with WGT (Botsball).”  B. Onkaram, L. A. Stroschein, and R. F. Goldman.  American Industrial Hygiene Association Journal; June 4, 2010.
[Link] “The Assessment of Sultriness. Part I: A Temperature-Humidity Index Based on Human Physiology and Clothing Science.”  R.G. Steadman.  Journal of Applied Meteorology and Climatology; July 1979.
[Link] “The Assessment of Sultriness. Part II: Effects of Wind, Extra Radiation and Barometric Pressure on Apparent Temperature.”  R.G. Steadman.  Journal of Applied Meteorology and Climatology; July 1979.
[Link] “Globe Temperature and Its Measurement: Requirements and Limitations.”  A. Virgilio, et al.  Annals of Work Exposures and Health; June 2019.
[Link] “Heat Stress Standard ISO 7243 and its Global Application.”  Ken Parsons.  Industrial Health; April 2006.
[Link] “Heat Index.”  Wikipedia.
[Link] “Thermal Work Limit.”  Wikipedia.
[Link] “Thermal comfort and the heat stress indices.”  Yoram Epstein and Daniel S. Moran.  Industrial Health; April 2006.
[Link] “The Thermal Work Limit Is a Simple Reliable Heat Index for the Protection of Workers in Thermally Stressful Environments.”  Veronica S. Miller and Graham P. Bates.  The Annals of Occupational Hygiene; August 2007.
[Link] “The Limitations of WBGT Index for Application in Industries: A Systematic Review.”  Farideh Golbabaei, et al.  International Journal of Occupational Hygiene; December 2021.
[Link] “The Heat Index ‘Equation’ (or, more than you ever wanted to know about heat index) (Technical Attachment SR 90-23).”  Lans P. Rothfusz.  National Weather Service; July 1, 1990.
[Link] “Evaluation of Occupational Exposure Limits for Heat Stress in Outdoor Workers — United States, 2011–2016.”  Aaron W. Tustin, MD, et al.  Morbidity and Mortality Weekly Report (MMWR).  Centers for Disease Control and Prevention; July 6, 2018.
[Link] “Occupational Heat Exposure. Part 2: The measurement of heat exposure (stress and strain) in the occupational environment.”  Darren Joubert and Graham Bates.  Occupational Health Southern Africa Journal; September/October 2007.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 3:  Heat Illness and Other Heat-Related Effects]]>Wed, 14 Jun 2023 05:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-3-heat-illness-and-other-heat-related-effects     When the human body’s thermoregulatory functions are unable to maintain heat balance in a hot environment, any of several maladies may result.  Collectively known as “heat illness,” these maladies vary widely in severity.  Therefore, a generic diagnosis of heat illness may provide insufficient information to assess future risks to individuals and populations or to develop effective management plans.
     This installment of the “Thermal Work Environments” series describes the range of heat illnesses that workers may experience.  This information can be used to identify risk factors and develop preventive measures.  It also facilitates effective monitoring of conditions, recognition of symptoms, and proper treatment of heat-affected employees.
Heat-Related Illness
     The following descriptions of heat-related illnesses are presented in order of increasing severity, though individual sensitivities and proclivities render this sequence an approximation.  Also, it should not be assumed that these illnesses will always be experienced in the same way.  For some, symptoms of lower-level illness may not be present, or detectable, prior to onset of more-severe heat illness.  However, when symptoms of “minor” illness do appear, they must be given prompt attention to prevent the person’s physical condition from degrading further.

Warm Discomfort
     While not truly a heat illness itself, the initial discomfort experienced due to heat is the first warning sign of impending heat stress and that management of the thermal environment may be necessary.  Although a person’s experience does not always reflect a clear progression of effects, warm discomfort is a likely predecessor to subsequent heat illness.

Dehydration
     Although dehydration can occur in any environmental conditions, it is most-often associated with heat stress.  Elevated temperature accelerates fluid loss; a demanding work cycle can limit a person’s opportunities to rehydrate or his/her conscious awareness of the need.  These demands exacerbate physical conditions that may exist upon entering the work environment.  Specifically, beginning work with a suboptimal hydration level (i.e. hypohydration) increases a worker’s risk of severe dehydration or heat illness.

Heat Rash
     Sometimes called “prickly heat,” heat rash is a common occurrence in hot work environments; it occurs during profuse sweating.  It is caused by sweat ducts becoming blocked, forcing sweat into surrounding tissue.  It is characterized by clusters of small blisters that give the skin a bumpy, red, or pimply appearance.  It appears most often on the neck, upper chest, and anywhere that skin touches itself, such as elbow joint creases, or where excretion of sweat is otherwise restricted.
     Dismissing heat rash as an aesthetic affliction is a mistake.  It is an indication that thermoregulatory function has been inhibited to some degree and should not be ignored.  Unchecked, it could accelerate onset of more-severe heat illness.
     The most effective response to heat rash is to move to a cooler, less humid environment.  Unfortunately, this is not often a realistic option.  Therefore, the person’s overall condition should be monitored to prevent worsening illness.  The area of the rash should be kept dry; powder may be applied for comfort, but anything that warms or moistens the skin should be avoided.

Heat Cramps
     Uncontrolled contractions or spasms, usually in the legs or arms, often result from the loss of fluid or salts when sweating.  Strong, painful muscle contractions are possible, even when a person has been drinking water.  If body salts, such as sodium and potassium, are depleted without replenishment, heat cramps are often the result.
     To offset the effects of profuse sweating, electrolyte-replacement drinks (i.e. “sports drinks”) should be added to the hydration regimen.  Eating an occasional snack is an alternate method of salt replenishment that may better serve a worker’s energy requirements than liquids alone.

Heat Syncope
     Syncope is the occurrence of dizziness, lightheadedness, or fainting.  Onset is usually caused by standing for an extended period of time or suddenly rising from a seated or prone position.  Dehydration and lack of acclimation to the hot environment may be contributing factors to the occurrence of heat syncope.

Heat Exhaustion
     It may be reasonable to address the heat-related illnesses previously discussed without medical attention beyond the assistance provided by coworkers.  A case of heat exhaustion, however, warrants professional medical care to ensure proper treatment and recovery.
     Heat exhaustion is caused by extreme dehydration and loss of body salts.  It is characterized by several possible symptoms, including headache, nausea, thirst, irritability, confusion, weakness, and body temperature exceeding 100.4° F (38° C).
     First aid for heat exhaustion includes moving the person to a cooler environment and encouraging him/her to take frequent sips of cool water.  Apply cold compresses to the person’s head, neck, and face; if cold compresses are not available, rinse the same areas with cold water.  Unnecessary clothing, including shoes and socks, should be removed; this is particularly important if the person wears impermeable protective layers, such as a chemical-resistant smock, leather garment or boots, etc.  At least one person should stay with the stricken worker, continuing the actions described, until s/he is placed in the care of medical professionals.  At such time, provide all pertinent information to expedite effective treatment.

Rhabdomyolysis
     Protracted physical exertion under heat stress can cause muscles to break down, releasing electrolytes, primarily potassium, and proteins into the bloodstream.  An elevated level of potassium can cause dangerous heart rhythms and seizures, and large protein molecules can cause kidney damage.
     Symptoms of rhabdomyolysis include muscle pain and cramps, swelling, weakness, reduced range of motion, and dark urine.  There is an elevated risk of misdiagnosis due to the similarity of the commonly-experienced symptoms to those of less-severe afflictions.  Tests can be performed to ensure proper diagnosis and reduce the risk of future complications.

Acute Kidney Injury
     One cause of kidney damage, as mentioned above, is the release of proteins from muscles that the kidneys are unable to process effectively.  It may also occur as a result of prolonged heavy sweating.  Low fluid and sodium levels (hypohydration and hyponatremia, respectively) impede normal renal function.  Unresolved, this can lead to kidney failure and the need for dialysis.  An effective hydration regimen is critical to kidney health.

Heat Stroke
     When the body’s thermoregulatory functions can no longer manage the heat stress to which it is subjected, heat stroke is the ultimate result.  Onset of heat stroke is typically characterized by hot, dry skin and body temperature exceeding 104° F (40° C).  The victim may also be confused or disoriented, slur speech, or lose consciousness.  Rapid, shallow breathing and seizures are also potential symptoms of heat stroke.
     Two types of heat stroke are possible:  classic and exertional.  The two are differentiated by several factors, summarized in Exhibit 1.  The key distinction is that classic heat stroke occurs during activity of much lower intensity than that inducing exertional heat stroke.  Sweating often continues during exertional heat stroke, eliminating an easily-identifiable symptom and potentially causing dangerous underestimation of the severity of a victim’s condition.
     First aid for both types of heat stroke is very similar to that for heat exhaustion, though more aggressive.  Additional cold compresses should be applied, particularly to the armpits and groin.  More thorough soaking with cold water, or in an ice bath, with increased air movement, should be provided to the extent possible.  Emergency medical care is a necessity for every heat stroke victim.

Death
     Undiagnosed or untreated heat illness can escalate rapidly.  Ignoring early warning signs places all workers in a hot environment at greater risk of heat stroke or other serious injury.  With a mortality rate of ~80%, heat stroke victims require immediate attention to have any hope of recovery; a body temperature exceeding 110° F (43.3° C) is almost always fatal. 
     Hot environments pose a greater risk to life than do cold environments.  There are three key reasons for this:
  1. Normal body temperature (98.6° F/37° C on average) is much closer to the safe upper limit (~104° F/40° C) than to the lower limit (~77° F/25° C).
  2. All external sources of heat load are in addition to metabolic heat, which is continually generated.
  3. “Excessive motivation,” whether positive (e.g. intrinsic desire to perform) or negative (e.g. avoidance of punitive action), can cause a person to ignore symptoms and warning signs of developing heat illness.
     Survivors of heat stroke often suffer from damage to vital organs, such as heart, kidneys, and brain.  Injuries associated with heat stroke typically require life-long vigilance in medical care and may be a victim’s ultimate cause of death.

Other Heat-Related Effects
     There are risks associated with hot work environments that are not adequately described in the “traditional” sense of heat illness.  A workplace with a radiant heat source (other than the sun) places workers at risk of burns.  The source of radiant heat is a hot object, often a furnace, forge, or other process equipment.  Workers may be required to be in close proximity to such equipment to operate or interact with it, such as when loading or unloading material.  A small misstep could cause a person to come in contact with the equipment or heated material.  Even with protective gear in proper use, direct contact could result in a severe burn.
     Thus far, the afflictions discussed have been physical in nature.  However, there are also heat-related cognitive effects to consider.  Tests conducted on subjects under heat stress have demonstrated the potential for significant cognitive impairment during extended exposure.
     Test subjects experienced reductions in working memory and information-processing capability.  Other results showed that stimulus-response times and error rates increased, with a corresponding increase in total task time.  Performance of complex tasks was affected to a greater degree than simple tasks, suggesting that subjects’ ability to concentrate had been negatively impacted by prolonged heat stress.
     The potential effect of impaired task performance on productivity and quality is easy to infer.  Less obvious, perhaps, is the increased risk of injury that results from reduced information-processing capability and increased reaction time.  Any lag in recognizing a dangerous condition, formulating an appropriate response, and executing it significantly increases risk to personnel and property.
 
     Discussion of each of the heat-related illnesses and other effects has implicitly referenced the time during a work shift.  The time between work shifts is also critically important to the well-being of workers returning to a hot environment day after day.
     The duration of the gap between shifts and the activities in which a person engages during that time determine his/her condition at the beginning of the next shift.  For optimum health and performance in subsequent shifts, workers should consider the following recovery plan elements:
  • Spend the “downtime” in a cool, dry (i.e. air-conditioned) environment.
  • Replenish fluids (“euhydrate”) and salts, with attention paid to achieving a proper balance.  Avoid alcoholic and heavily-caffeinated beverages.
  • Reduce physical activity, allowing the heart, muscles, and brain to recover.
  • Get plenty of rest; the human body “rebuilds” while sleeping.
Risk Factors
     Workers in hot environments are exposed to a number of risk factors for heat-related illness.  The diagram in Exhibit 2 names a baker’s dozen of them; several have already been discussed in this series.  For example, the presentation of the heat balance equation in Part 2 included discussion of several of these factors, including temperature and humidity, radiant heat sources, physical exertion, and medications.  Others are discussed further in subsequent installments of the “Thermal Work Environments” series.
     The effects of heat stress range from mild to severe, even fatal.  Protection from heat illness begins with a cohesive team, whose members look after one another and respond appropriately to the earliest signs of onset.  To be effective guardians of their own health and that of their teammates, workers must possess an understanding of heat illness, common symptoms, and first aid treatments.  Tolerance to heat varies widely among individuals; the ability to recognize changes in a person’s condition or behavior, in the absence of sophisticated monitoring systems, is paramount.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Human Factors in Technology.  Edward Bennett, James Degan, Joseph Spiegel (eds).  McGraw-Hill Book Company, Inc., 1963.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “NIOSH Criteria for a Recommended Standard Occupational Exposure to Heat and Hot Environments.”  Brenda Jacklitsch, et al.  National Institute for Occupational Safety and Health (Publication 2016-106); February 2016.
[Link] “Occupational Heat Exposure. Part 1: The physiological consequences of heat exposure in the occupational environment.”  Darren Joubert and Graham Bates.  Occupational Health Southern Africa Journal; September/October 2007.
[Link] “Workers' health and productivity under occupational heat strain: a systematic review and meta-analysis.”  Andreas D. Flouris, et al.  The Lancet Planetary Health; December 2018.
[Link] “Evaluating Effects of Heat Stress on Cognitive Function among Workers in a Hot Industry.”  Adel Mazloumi, Farideh Golbabaei, et al.  Health Promotion Perspectives; December 2014.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 2:  Thermoregulation in Hot Environments]]>Wed, 31 May 2023 06:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-2-thermoregulation-in-hot-environments     The human body reacts to exposure to – and generation of – heat by activating various system responses.  The nervous, cardiovascular, respiratory, and exocrine systems are key players in the physiological behavior of workers subject to heat stress.  Effective thermoregulation requires that these systems operate in highly-interconnected ways.
     This installment of the “Thermal Work Environments” series provides an overview of the human body’s thermoregulatory functions that are activated by heat stress and introduces the heat balance equation.  Each component of the heat balance equation is described in terms of physiological and environmental factors that impact thermoregulation.
Thermoregulatory Function
     Core body temperature is regulated by the hypothalamus, located at the base of the brain (see Exhibit 1); its functions are divided between two areas.  The anterior hypothalamus manages heat-dissipative functions, such as vasodilation and sweat production.
     Vasodilation results in increased blood flow to the outer regions of the body, transferring heat from the core to the skin.  A corresponding rise in heart rate increases the rate of heat transfer from the core to extremities.
     Rising skin temperature prompts sweat production.  Evaporation of sweat from the skin is the largest contributor to heat loss from the body; improving its efficiency is a common goal in hot environments.  It is also the reason that proper hydration is critical to maintaining well-being in a hot environment.
     Respiration also contributes to heat loss, as inhaled air is warmed by the body before being expelled.  This holds until the ambient temperature reaches or exceeds that of the body, at which point respiration begins to add to heat stress.  Respiration plays a lesser role in humans than in other animals.  Dogs, for example, pant to increase respiratory heat loss; it is a larger contributor for them, relative to other mechanisms, than for humans.
     These are the primary control mechanisms that act in concert to regulate core body temperature.  These controls are activated automatically, often without our conscious awareness.  Other responses to heat stress require active engagement, such as monitoring physical and environmental conditions, adjusting clothing and equipment, and developing work-rest cycles and contingency plans.  These factors are relevant to the pursuit of heat balance.
 
Heat Balance
     Homeothermy requires a balance between the heat generated or absorbed by the body and that which is dissipated from the body.  In “perfect” equilibrium, the net heat gain is zero.  Zero heat gain implies that the body’s thermoregulatory response functions (i.e. heat strain) are sufficient to maintain a constant core temperature in the presence of heat stress.
     As mentioned in Part 1, heat stress and heat strain are quantifiable, typically presented in the form of a heat balance equation.  The form of heat balance equation used here is
     S = M + W + C + R + K + E + Resp,
where S is heat storage rate, M is metabolic rate, W is work rate, C is convective heat transfer (convection), R is radiative heat transfer (radiation), K is conductive heat transfer (conduction), E is evaporative cooling (evaporation), and Resp is heat transfer due to respiration.  Each value is positive (+) when the body gains thermal energy (“heat gain”) and negative (-) when thermal energy is dissipated to the surrounding environment (“heat loss”).  Each term can be expressed in any unit of energy or, if time is accounted for, power, but consistency must be maintained.  The following discussion provides some detail on each component of the heat balance equation in the context of hot environments.
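     The bookkeeping implied by this sign convention can be expressed directly.  The sketch below uses illustrative values only; in practice, each term would come from measurement or estimation.

def heat_storage_rate(m, w, c, r, k, e, resp):
    # S = M + W + C + R + K + E + Resp, using the sign convention above:
    # positive terms add heat to the body, negative terms dissipate it.
    # Any consistent power unit (e.g. watts) may be used.
    return m + w + c + r + k + e + resp

# Illustrative values (W): moderate work in warm surroundings
s = heat_storage_rate(m=300, w=-20, c=40, r=60, k=0, e=-350, resp=-10)
print(s)   # 20, i.e. a net heat gain that thermoregulation must offset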
 
     The “perfect” equilibrium mentioned above and, thus, constant core temperature is achieved when S = 0.  This situation is more hypothetical than realistic, however.  Fluctuations of body temperature occur naturally and, within limits, are no cause for concern.  For example, a person’s core temperature varies according to his/her circadian or diurnal rhythm.  Despite a range of up to 3°F (1.7°C), these fluctuations go largely unnoticed.
     The average “normal” temperature, 98.6°F (37.0°C), is cited frequently.  Less common, however, is discussion of a range of “safe” temperatures.  Most people maintain normal physiological function in the 97 – 102°F (36.1 – 38.9°C) range of core temperature.  It is also worth noting that these values refer to oral temperature; rectal temperatures are usually ~ 1°F (0.6°C) higher.  While rectal temperature is a more accurate measure of core temperature, the limitations on its use in most settings should be obvious.
     Because the human body is sufficiently resilient to accommodate significant temperature fluctuations, S = 0 can be treated as a target average.  Heat storage in the body (S) will vary as the body’s thermoregulatory control “decisions” are executed.  Heat storage can become dangerous when S > 0 for an extended period, trends upward, or becomes exceptionally high.
 
     The metabolic rate (M) is the rate at which the body generates heat, corresponding to work demands and oxygen consumption.  Precise measurements are typically limited to research settings; workplace assessments of heat stress typically use estimates or “representative values.”  Exhibit 2 provides a guide for selecting a representative metabolic rate for various scenarios.
     M is always positive (M > 0), representing thermal energy that must be dissipated in order to maintain a constant core temperature.  It may also be called the “heat of metabolism;” heat is generated by chemical reactions in the body, even in the absence of physical work.
     A number of factors can affect a person’s metabolic rate.  Several are presented, in brief, below:
  • The value of M when a person is at rest under normal conditions is called the basal metabolic rate (BMR).  It is the minimum rate of metabolic heat generation, primarily influenced by thyroid activity, upon which other influences build.
  • The size of a person’s body influences his/her BMR; “larger individuals have greater energy exchanges than smaller persons.” (Bennett, et al)  Though the effect on BMR has not been found to be proportional to any single measure of body size, it varies to the 2/3 or 3/4 power of a person’s weight.
     Related research suggests that only fat-free tissues of the body contribute to basal heat production.  This finding may encourage the use of body mass index (BMI) calculations, though they are notoriously unreliable.  More accurate methods of determining body fat content exist, but they are more difficult to execute.  Their use, therefore, is typically limited to in-depth research scenarios.
  • A person experiencing a period of growth requires additional energy.  Therefore, a young person has a higher BMR than a comparably-sized adult.
  • A person’s diet influences heat production through the specific dynamic action (SDA) of the food ingested.  SDA is the heat generated by digestion in excess of the energy value of the food.  Protein produces the highest SDA, while fat and carbohydrates have less impact.
  • The use of drugs – prescription or otherwise – may influence a person’s metabolism.  Some effects may be desired, improving a condition being treated, while others may be unintended and detrimental.
  • Physical activity, or “muscular exercise,” changes the energy requirements of the body; some energy is, inevitably, converted to heat.
  • A person’s core temperature also influences his/her metabolic rate; M increases ~7% per degree Fahrenheit (0.6°C) rise in core temperature.  This cyclical influence on the heat balance can contribute to a “runaway” thermal condition if not properly managed; a simple numerical illustration follows this list.
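     The feedback described in the last item can be shown with a toy simulation.  Everything below – the fixed dissipation capacity, the lumped body heat capacity, and the starting values – is an assumption chosen only to show the direction of the effect; this is not a physiological model.

# Toy illustration of the metabolic feedback (Python).  All parameters are
# illustrative assumptions; only the ~7%-per-°F rule comes from the text above.
M0, T0 = 340.0, 98.6          # assumed metabolic rate (W) and core temperature (°F)
DISSIPATION = 320.0           # assumed fixed heat-loss capacity (W)
HEAT_CAP = 135_000.0          # assumed lumped body heat capacity (J per °F)

T, dt = T0, 60.0              # one-minute time steps
for minute in range(1, 121):
    M = M0 * 1.07 ** (T - T0)       # M rises ~7% per °F of core-temperature rise
    S = M - DISSIPATION             # net heat storage rate (W)
    T += S * dt / HEAT_CAP          # stored heat raises core temperature
    if minute % 30 == 0:
        print(f"t = {minute:3d} min   M = {M:5.1f} W   T = {T:6.2f} °F")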

     The work rate (W) represents the portion of energy consumed in the performance of work that is not converted to heat.  Many formulations of the heat balance equation exclude this negative (heat-reducing) term, deeming it safe to ignore, as it is usually less than 10% of M.
 
     Heat dissipation via convection (C) begins with the circulatory system.  Heat from the body’s core and muscles is transferred to the skin, preventing “hot spots” that could damage organs or other tissue.  From the skin, heat is transferred to the surrounding air (C is negative), assuming the ambient temperature is lower than the skin temperature.  If the reverse is true, C becomes positive, adding to the body’s heat load.
 
     The radiation (R) term refers, specifically, to infrared radiation exchanged between the body and nearby solid objects.  The skin acts as a nearly-perfect black body; that is, it efficiently absorbs (positive R) and emits (negative R) infrared radiation.  Like convection, the sign (direction) of radiative heat transfer depends on the temperature of the skin relative to that of the surroundings.  A person’s complexion has no effect on infrared radiation or radiative heat transfer.
 
     Heat transfer by conduction (K) is not common in workplaces, as it requires direct contact with a solid object.  Where it does exist, it is often highly localized and transient, such as in the hands during manual manipulation of an object.  It is positive when touching a hot object and negative when touching a cold one.  Contact with objects made through clothing is considered “direct contact” for purposes of heat stress assessment.
 
     In this formulation of the heat balance equation, the evaporation (E) term captures the cooling effect of sweat evaporating from the skin.  In mild conditions, the amount of sweat produced may be imperceptible, but it is not insignificant.  This “insensible water loss” can approach 1 qt (0.9 L) per day, dissipating ~25% of basal heat production.  During strenuous physical activity, the body can produce more than 3.2 qt (3 L) of sweat in one hour.
     This component is often called evaporative cooling; as this term implies, E is always negative.  Several physical and environmental conditions place limitations on the capacity of evaporative cooling.  Proper hydration is necessary to sustain the high sweat rates that produce maximum cooling.  Clothing and protective gear may limit the interface area available or the efficiency of evaporation.
     Ambient conditions significantly impact the body’s ability to cool itself via evaporation.  As humidity increases, the rate of evaporative cooling decreases.  Increasing air speed enhances evaporation, though no additional benefit is gained at speeds above ~6.7 mph (3 m/s) or air temperatures above 104°F (40°C).  When air temperature exceeds skin temperature, low humidity is needed for evaporative cooling to offset convective heat gain and maintain a net heat loss.  In favorable conditions, evaporative cooling is the single greatest contributor to heat loss from the body.
 
     Heat loss due to respiration (Resp) may be difficult to quantify.  In many formulations of the heat balance equation, it is included in the evaporation (E) term, as the largest contribution comes from expelling water vapor.  There is also heating of the air while in the lungs, though it may be a relatively small heat transfer.
     Resp is usually negative, but could become positive in very high air temperatures.  Such conditions are not common in workplaces, as this type of environment is often deemed unsafe for various reasons.  In most cases, access to such an environment is restricted to individuals with protective gear, such as breathing apparatus, limited in duration, and closely monitored.
     Though the respiration component may be difficult to quantify, independent of other mechanisms of heat loss, inclusion of the Resp term in this discussion is useful for practical purposes.  Managing heat and fluid loss in a hot environment is aided by simply recognizing that respiration makes a contribution, even if its magnitude is unknown.  The direction of heat transfer is usually understood intuitively; this may be the only information needed for workers to take additional precautions to ensure their well-being.
 
     The body’s heat balance is pictorially represented as a mechanical balance scale in Exhibit 3.  It presents factors that increase and decrease core temperature as well as the “normal” range of variation throughout the day.  A visual reference can be a useful tool, as it is more intuitive than a written equation, promoting deeper understanding that aids practical application of information.

     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Thermal Work Environments” entries on “The Third Degree,” see Part 1:  An Introduction to Biometeorology and Job Design (17May2023).
 
References
[Link] Human Factors in Technology.  Edward Bennett, James Degan, Joseph Spiegel (eds).  McGraw-Hill Book Company, Inc., 1963.
[Link] Kodak's Ergonomic Design for People at Work.  The Eastman Kodak Company (ed).  John Wiley & Sons, Inc., 2004.
[Link] “Hypothalamus” in Encyclopedia of Neuroscience.  Qian Gao and Tamas Horvath.   Springer, Berlin, Heidelberg; 2009.
[Link] “Thermal Indices and Thermophysiological Modeling for Heat Stress.”  George Havenith and Dusan Fiala.  Comprehensive Physiology; January 2016.
[Link] “Occupational Heat Exposure. Part 1: The physiological consequences of heat exposure in the occupational environment.”  Darren Joubert and Graham Bates.  Occupational Health Southern Africa Journal; September/October 2007.
[Link] “NIOSH Criteria for a Recommended Standard:  Occupational Exposure to Heat and Hot Environments.”  Brenda Jacklitsch, et al.  National Institute for Occupational Safety and Health (Publication 2016-106); February 2016.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Thermal Work Environments – Part 1:  An Introduction to Biometeorology and Job Design]]>Wed, 17 May 2023 05:00:00 GMThttp://jaywinksolutions.com/thethirddegree/thermal-work-environments-part-1-an-introduction-to-biometeorology-and-job-design     In the minds of many readers, the term “thermal environment” may induce images of a desert, the Arctic, or other thoughts of extreme conditions.  While extreme conditions require intense planning and preparation, they merely bookend the range of work conditions that require consideration.  That is to say that the environmental conditions of all workplaces should be thoroughly assessed and the impacts on the people within them properly addressed.
     The ensuing discussion is generalized to be applicable to a wide range of activities.  The information presented in this series is intended to be universally applicable in manufacturing and service industries.  Additional guidance may be available from other sources; readers should consult industry- or activity-specific organizations for detailed information on best practices and regulations that are beyond the scope of this series.
Terms in Use
     Most of the terms used in this series are in common use, though their application to the subject at hand may be unfamiliar to some readers.  The usage of some of these terms is presented here to facilitate comprehension of information throughout the series.
     It seems logical to begin with the title of the series:  “Thermal Work Environments.”  This term was chosen to limit the scope of discussion to environmental conditions found in workplaces, differentiating them from military operations, athletics, and leisure activities.  The information provided remains valid in these contexts, but the objectives and decision-making, not to mention the clothing requirements, differ sufficiently to warrant explicit exclusion from the discussion of work environments.  Environmental considerations in these settings will be addressed, briefly, however, as a related topic.
     “Thermal,” as used here, has multiple connotations.  For one, it refers to the homeothermic nature of human beings.  The human body attempts to maintain a constant core temperature irrespective of its surroundings; homeo ≈ same, therm ≈ temperature.  With regard to surroundings, it is an umbrella term that encompasses several variables that influence a person’s perception of temperature and assessment of comfort.  These include the actual (air) temperature, humidity, air movement (e.g. wind), sunlight, and other sources of radiation.
     Comfort, as referenced above, is the subjective, individual perception of conditions.  Only thermal comfort will be considered here.  There are important distinctions between comfort, stress, and strain.  Stress and strain can be related to either high or low temperatures:
  • stress – the net effect (i.e. heat load) of metabolic heat generation, clothing, and environmental conditions to which an individual is subject.
  • strain – the physiological response to stress; i.e. changes in the body, made automatically, to retain (cold strain) or dissipate (heat strain) thermal energy.
Stress and strain are quantifiable phenomena, whereas comfort is a qualitative judgment of thermal stress or its absence.
     Even “heat” and “cold” warrant explicit mention, as their use blurs vernacular and technical meaning.  Technically speaking, heat is thermal energy; cold has no technical definition.  An attempt at rigid adherence to technical terminology in this discussion would be futile and counterproductive.  Conventional (i.e. vernacular) use of the terms suffices:
  • heat – high or excess thermal energy; cooling desired.
  • cold – low or insufficient thermal energy; warming desired.
A schematic representation of the continuum of thermal stress that spans these terms is provided in Exhibit 1.
     Coming full circle, we return to the title of this installment.  All of the terms discussed thus far are used in reference to biometeorology – the study of the effects of atmospheric conditions, such as temperature and humidity, be they naturally-occurring or artificially generated, on living organisms.  Our interest, of course, is in human biometeorology and how it influences job design.
     Job design defines a variety of elements of a person’s workplace experience. These may include the physical layout of a workstation or entire facility, equipment used, policies and procedures to be followed, and the schedule according to which tasks are performed.  All aspects of how and when work is performed are part of its job design.
     Understanding how the terms presented here are used is necessary to comprehend this series as a whole.  Other terms are introduced throughout the series in the context of relevant discussions.
 
Structure of the Series
     The “Thermal Work Environments” series is presented in several parts, in three loosely-defined “sections.”  The first section discusses hot environments, including the physiological effects on people working in elevated temperatures.  Measurements and calculations used to define and compare environmental conditions – specifically, the heat stress caused – are also presented.  Finally, recommendations are provided to assist those designing and performing tasks in minimizing the detrimental effects of heat stress.
     The second section discusses cold environments.  The presentation of information mirrors that of hot environments in the first section.  The final section is comprised of discussions of related topics that, while useful, could not be included seamlessly in the first two sections.
     The series structure described was derived with the following objectives:
  • Limit the length and scope of each installment so that they are easy to consume and comprehend.
  • Simplify future references to relevant material by making it easy to locate within the series’ installments.
  • Promote a holistic approach to job design in environments subject to seasonal (or similar) variations by facilitating side-by-side comparison of considerations relevant to hot and cold environments.
  • Simplify expansion or modification of the series to maintain its utility as knowledge and practices evolve.
     There is a series directory at the end of this post.  Links will be added as new installments are published; returning to this post provides quick and easy access to the entire series.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] “A glossary for biometeorology.”  Simon N. Gosling, et al.  International Journal of Biometeorology; 2013.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
 
Directory of “Thermal Work Environments” entries on “The Third Degree.”
Part 1:  An Introduction to Biometeorology and Job Design (17May2023)
Part 2:  Thermoregulation in Hot Environments (31May2023)
Part 3:  Heat Illness and Other Heat-Related Effects (14Jun2023)
Part 4:  A Measure of Comfort in Hot Environments (28Jun2023)
Part 5:  Managing Conditions in Hot Environments (12Jul2023)
]]>
<![CDATA[Toxicity]]>Wed, 03 May 2023 05:00:00 GMThttp://jaywinksolutions.com/thethirddegree/toxicity     A toxic culture can precipitate a wide range of deleterious effects on an organization and individual members.  The toxicity of an organization becomes obvious when overt behaviors demonstrate blatant disregard for social and professional norms.  These organizations often become fodder for nightly “news” broadcasts as they are subject to boycotts, civil litigation, and criminal prosecution.
     An organization’s toxicity can also manifest much less explicitly.  Subtle behaviors and surreptitious actions are more difficult to detect or to evince intent.  It is this uncertainty that allows toxic cultures to persist, to refine and more-effectively disguise maladaptive behaviors.
     To combat organizational toxicity, leaders must appreciate the importance of a healthy culture, recognize the ingredients of toxic culture, and understand how to implement effective countermeasures.
What It Is and Why It Matters
     “Culture,” in general, and “corporate culture,” specifically, can be defined in myriad ways.  For simplicity and convenience, we will rely on our constant companion, dictionary.com, for ours:  “the values, typical practices, and goals of a business or other organization, especially a large corporation” (def. 7).  Each of the components of this definition – values, practices, and goals – contribute extensively to the culture created within an organization.
     An organization’s values are the ideals pursued, as matters of course, during normal operations or, perhaps more accurately, those espoused by the organization’s leadership.  These often include things like diversity, community involvement, environmental protection, innovation, personal development, cultural sensitivity, and a host of other genuine interests and platitudes.
     An organization’s goals should be derived directly from its values.  Financial goals are obvious requisites, but environmental, personnel development, and community project goals may also be established.  Some goals may be publicly announced, while others are only discussed internally.
     The practices in which an organization engages – or which it tolerates – demonstrate the extent to which its values are honored while its goals are pursued.  Practices that are not aligned with stated values, whether organizational or individual in nature, are sources of toxicity.
     “Toxicity” is a generalized term used to describe any aberrant behavior, environmental condition, or negative affect that undermines team cohesiveness, effective decision-making, individual performance, or well-being.  A “toxic workplace culture” is one in which toxicity is encountered with regularity by one or more individuals or groups.  An important and far too-prevalent example of toxic culture is discussed in “Managerial Schizophrenia and Workplace Cancel Culture” (9Mar2022).
 
     The individual members of an organization are like the cells of a living organism.  Poor functioning or loss of one cell may be easily overcome; however, as the number of poisoned cells increases, functioning of the entire organism degrades.  Likewise, as toxic culture spreads within an organization, its success and survival are jeopardized.
     The deleterious effects of toxic culture compound over time.  Toxicity progresses from individuals to those around them and then to larger and larger groups.  Without effective intervention, the spread of toxic culture and its consequences to the entire organization is inevitable.  It may be tempting to call it a domino effect, but it is much more complex than that.  The spread of toxic culture is not as linear or predictable as falling dominoes.
     This progression is now discussed in brief; a thorough exploration of all possible paths and consequences of toxic culture development is beyond the scope of this presentation.  It should suffice, however, to convince readers that workplace culture is worthy of rigorous scrutiny and course correction.
 
     The first recognizable symptom of a toxic workplace culture is often an individual’s increasing stress level.  This does not include the stress induced by a challenging project or looming deadline (assuming these were appropriately assigned); these are often considered forms of “good stress” that motivate and inspire people to do their best work.  Instead, this refers to “bad stress” – that which is unnecessary and undeserved.  Stress and dissatisfaction tend to increase, causing additional problems for the individual, such as self-doubt, burnout, and other mental health concerns.  Left unchecked, stress can also lead to physical illness as serious as heart disease or other chronic disorders.
     The effects on an individual impact coworkers in two key ways.  First, relationships may be strained, as individuals’ responses to elevated stress are often unhealthy interpersonally.  Second, the coworkers’ workload often increases as a result of the individual’s reduced productivity and increasing absenteeism.  Any project team, department, or committee of which the individual is a member is, thus, less effective.  This can create a spiral where one team member “drops out,” raising the stress levels of others, who eventually succumb to its ill effects.
     Weakening financial performance is a common downstream effect of toxicity.  However, the influence of an organization’s culture on its financial performance is often recognized only post mortem; that is, after a business has collapsed or is in crisis.  In most cases, there are plenty of signs – big, flashing, neon signs – that are simply ignored until irreparable damage has been done.
     Reduced productivity, engagement, and innovation are clear signals that trouble is brewing.  Rising healthcare costs, absenteeism, and attrition also provide reliable warnings.  Difficulty recruiting new employees can also be a sign that those outside the organization recognize a problem, even if those inside it are in denial.
     As “good” people depart, those left behind are stressed by an increasing concentration of toxicity, accelerating the organization’s demise.  This can be brought about through financial collapse or accelerated by noncompliance and corruption.  Once civil litigation and criminal prosecution of officers begins, survival of the organization is uncertain at best.
     An organization’s culture generates various cycles of behavior.  These can be virtuous cycles that reinforce positive behaviors and support long-term goals or vicious cycles that drive away ethical, high-performing team players.  Every behavior is endorsed, either explicitly or implicitly, or interrupted; the choice is made by leaders throughout an organization during every cycle.  Defeating vicious cycles requires consistent interruption with demonstrations of proper behavior that begin new virtuous cycles.
 
Characteristics of Toxic Culture
     Researchers at CultureX have identified five characteristics of “corporate” culture that push an organization beyond annoying or frustrating to truly toxic.  The “Toxic Five” are:  disrespectful, noninclusive, unethical, cutthroat, and abusive.
     A disrespectful environment exerts a strong negative influence on employee ratings of their workplace.  A somewhat generic term, disrespect includes any type of persistent incivility and may overlap other characteristics of toxic culture.  Being dismissive of one’s ideas or inputs without proper consideration is a common form of disrespect experienced in toxic cultures.
     Noninclusive workplaces are those in which employees are differentially valued according to traits unrelated to any measure of merit.  Demographic factors relevant to noninclusive cultures include race, gender, sexual orientation, age, and disability.  Any type of discrimination or harassment based on these traits is evidence of a noninclusive culture.
     Pervasive cronyism, where “connections” afford special privileges, is also indicative of a noninclusive culture.  “General noninclusive culture” refers to sociological in-group and out-group behavior; at its extreme, one or more colleagues may be ostracized by a larger or more-entrenched group.  Cliques are not just for high school anymore!
     Unethical behavior can also take many forms; it may be directed at peers, subordinates, superiors, customers, suppliers, or any stakeholder that could be named.  It could involve the use or disclosure of employees’ personal information, falsifying regulatory, financial, or other documentation, misleading or intentionally misdirecting subordinates or managers, or myriad other inappropriate actions or omissions.  Ethics is a broad topic, a proper exploration of which is beyond the scope of this presentation.
     A cutthroat environment is one in which coworkers actively compete amongst themselves.  This type of culture discourages cooperation and collaboration; instead, employees are incentivized to undermine one another.  In extreme cases, sabotage, by physical or reputational means, may be committed to maintain a favorable position relative to a coworker.  Workplace Cancel Culture thrives in cutthroat environments.
     An abusive culture refers, specifically, to the behavior of supervisors and managers.  Supervisors may be physically or verbally aggressive, but abusive behavior is often more subtle.  Publicly shaming an employee for a mistake, absence, or other “offense,” as well as individually or collectively disparaging team members are clear signs of abusive management that are often ignored.  Abusive behavior must be differentiated from appropriate reprimands, respectfully and professionally delivered, and other disciplinary actions required of effective management.
     The Toxic Five provide a framework for understanding the conditions in which toxic cultures develop and persist.  What is now needed is an effective method of culture-building that prevents toxicity from spreading and provides an antidote for isolated cases that develop.
 
Models of Culture Development
     Various cultural frameworks have been developed; some by prominent academics or intellectuals, others by famous managers, and still others in relative obscurity.  The best model for any organization may be a hybrid of existing frameworks or a new approach that exploits its unique character.  A small sample of existing models is presented here for inspiration.

The Three-Legged Stool.  A rather simple model, the three-legged stool approach suggests that cultural development relies on resources, training, and accountability.  Each leg is fundamental to a healthy culture and easy to understand.
     Without the required resources, employees are unable to perform as expected, causing stress and dissatisfaction.  This begins the progression of deleterious effects discussed previously.  Unwillingness to provide necessary resources indicates an environment that is disrespectful to team members and may also be related to unethical or abusive behavior.
     Training provides the know-how that employees need to succeed.  It should consist of more than the technical aspects of a job, including appropriate responses to exposure to toxicity.  Team members must understand the behaviors required of virtuous cycles to effectively interrupt and replace vicious cycles.
     Every member of an organization must be held accountable for his/her actions and influence on workplace culture.  All individual contributors, managers, and executives must be held to the same standard for a healthy culture to endure.

Four Enabling Conditions.  The four enabling conditions were originally published as keys to effective teamwork.  Teamwork and culture are so intricately interwoven that considering these conditions as enablers of healthy culture is also valid.  They are:  compelling direction, strong structure, supportive context, and shared mindset.
     To be effective, an organization requires a “compelling direction.”  This can usually be discerned from stated values and goals that define where the organization intends to go and what paths to that destination are and are not acceptable.
     A “strong structure” enables the highest performance of which an organization is capable.  In this context, structure refers to membership in a group and the apportionment of responsibility within it.  Diverse backgrounds and competencies create a versatile team.  Careful consideration of workflows and assignment of responsibility supports efficient achievement of objectives.  A versatile, efficient organization can be said to have a strong structure.
     The “supportive context” needed to maintain a healthy culture is closely related to the resources of the Three-Legged Stool model.  It also incorporates an incentive structure that encourages cooperation and collaborative pursuit of objectives.
     It is particularly important – and difficult – to establish a “shared mindset” within a geographically dispersed organization.  Consistently “lived” values are critical to maintaining a shared mindset; all members must receive the same messages, treatment, and resources for it to survive.
     The four enabling conditions are prerequisites to a healthy culture, but they do not guarantee it.  One must never forget that teams are comprised of humans, with all the idiosyncrasies and perplexities that render team dynamics as much art as science.  For this reason, maintaining a healthy culture requires vigilance and dedication.

Three Critical Drivers.  In addition to the Toxic Five, the team at CultureX has identified the three “most powerful predictors of toxic behavior in the workplace.”  Slight modification of the terminology and viewpoint yields the “critical drivers” of culture:  leadership, social norms, and work design.  Implicit in the term is that these drivers can lead to healthy culture, if well-executed, or toxicity, if poorly executed.
     It is likely no surprise that leadership is consistently found to be the strongest driver of culture, be it in a positive or negative direction.  Leaders set expectations, whether consciously or inadvertently; team members mirror leaders’ behaviors, as they are understood to be “the standard.”
     Leaders throughout an organization provide examples of behavior for those around them.  In dispersed groups, this can lead to the development of “microcultures” that differ from other locations or the “corporate standard.”  The existence of a microculture can be beneficial, neutral, or unfavorable.  A toxic microculture is sometimes called a “pocket of toxicity.”  Once discovered, a pocket of toxicity must be contained and corrected to protect the entire organization.
     Behaviors are deemed acceptable when they are aligned with an organization’s social norms.  Norms are context-sensitive; what is appropriate in one setting may be unacceptable in another.  A leader’s behavior often establishes social norms, but a cohesive team can define its own that negate some toxicity that would otherwise infiltrate the group.
     Elements of work design can be modified to reduce employees’ stress and increase productivity and satisfaction.  Eliminating “nuisance work” from a person’s responsibilities is a clear winner, but is not as straightforward as it might first appear.  A job cannot always be customized to an individual; it must meet the needs of the organization regardless of who is performing it.  One person might be tortured by “paperwork,” while another is annoyed by the need to keep physical assets organized.  If every task that could be distasteful to any employee were removed, no work would get done!
     Instead, focus on eliminating “busy work” or nonvalue-added activities.  Employees are more likely to remain engaged when performing tasks they do not enjoy if they understand the value of the work.  Allowing flexibility in task performance or incorporating their input in the work design also increases engagement and satisfaction.
     While flexibility is desirable in task performance, clarity and consistency are necessary when it comes to roles and responsibilities.  Obviously, an individual needs to know the requirements of his/her own job, but understanding the roles of others is also important.  If support is needed, or a problem is discovered, each team member must know to whom it should be reported.  Ambiguous reporting structures, with intersecting hierarchies, “dotted-line” reporting relationships, and multiple “bosses,” make it difficult for anyone to be confident in the correct course of action that will both achieve the desired outcome and satisfy reporting expectations.
 
     There is significant overlap in the models presented, though attention was drawn to little of it.  The remainder is left to the reader to recognize and implement in the fashion that best suits his/her circumstances.
 
Final Thoughts
     The preceding discussion focused on the spread of toxicity within an organization.  It is worth noting, however, that the deleterious effects of toxic culture, in many cases, are not confined to a single organization.  Toxic behaviors are often reciprocated or contagious, allowing the spread of toxicity to an organization’s supply chain, customers, and local or global community.  Every stakeholder is susceptible to the effects of toxicity that is allowed to permeate an organization.  Leaders’ diligence in maintaining a healthy culture protects every member of the organization and those with whom they interact.
 

     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] “Why Every Leader Needs to Worry About Toxic Culture.”  Donald Sull, Charles Sull, William Cipolli, and Caio Brighenti.  MIT Sloan Management Review; March 16, 2022.
[Link] “How to Fix a Toxic Culture.”  Donald Sull and Charles Sull.  MIT Sloan Management Review; September 28, 2022.
[Link] “The Secrets of Great Teamwork.”  Martine Haas and Mark Mortensen.  Harvard Business Review; June 2016.
[Link] “A Leg Up.”  Gary S. Netherton.  Quality Progress; November 2020.
[Link] “Does your company suffer from broken culture syndrome?”  Douglas Ready.  MIT Sloan Management Review; January 10, 2022.
[Link] “5 Unspoken Rules That Lead to a Toxic Culture.”  Scott Mautz.  Inc.; June 6, 2018.
[Link] “Stop These 4 Toxic Behaviors Before Your Employees Quit.”  Scott Mautz.  Inc.; September 28, 2016.
[Link] “Why You’re Struggling to Improve Company Culture.”  Dan Markovitz.  IndustryWeek; December 5, 2017.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Commercial Cartography – Vol. XI:  Materiality Matrix]]>Wed, 19 Apr 2023 04:30:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-xi-materiality-matrix     In common language, “materiality” could be replaced with “importance” or “relevance.”  In a business setting, however, the word has greater significance; no adequate substitute is available.  In this context, materiality is not a binary characteristic, or even a one-dimensional spectrum; instead it lies in a two-dimensional array.
     Materiality has been defined in a multitude of ways by numerous organizations.  Though these organizations have developed their definitions independently, to serve their own purposes, there is a great deal of overlap among them.  Perhaps the simplest and, therefore, most broadly-applicable description of materiality was provided by the GHG Protocol:
“Information is considered to be material if, by its inclusion or exclusion, it can be seen to influence any decisions or actions taken by users of it.”
     Recognizing the proliferation and potential risk of divergent definitions, several organizations that develop corporate reporting standards and assessments published a consensus definition in 2016:
“Material information is any information which is reasonably capable of making a difference to the conclusions reasonable stakeholders may draw when reviewing the related information.” (IIRC, GRI, SASB, CDP, CDSB, FASB, IASB/IFRS, ISO)
     The consensus definition is still somewhat cryptic, only alluding to the reason for its existence – corporate financial and ESG (Environmental, Social, Governance) reporting.  As much can be surmised from the list of signatory organizations as from the definition itself.
     A materiality matrix is a pictorial presentation of the assessments of topics on two dimensions or criteria.  It can be presented as a 2 x 2 matrix, such as that in Exhibit 1; slightly increased granularity is provided by a 3 x 3 matrix, as shown in Exhibit 2.  Granularity at its extreme results in a conventional two-dimensional graph, such as that in Exhibit 3.
     As seen in this set of examples, axis titles can vary.  The choices made may be dependent upon the company’s common language, the purpose of the assessment, or the type of report to be prepared.  For simplicity and consistency, the following convention is used throughout this presentation:
            Horizontal (“X”) axis – Impact on Business
            Vertical (“Y”) axis – Importance to Stakeholders.
This phraseology is equally applicable to financial and ESG reporting, simplifying implementation.
 
Types of Materiality
     The generic definitions of materiality presented in the introduction refer to “single materiality.”  The two types of single materiality, commonly labeled “financial” and “impact” (ESG or sustainability) materiality, have already been mentioned.
     “Double materiality” refers to information relevant to both financial and impact materiality reports.  The degree of materiality may differ between types for a particular topic, and often does.  Nonetheless, if information is deemed material in both contexts, it is said to exhibit double materiality.
     Different users of reported information may require varying levels of detail to apply it appropriately.  This situation has been dubbed “nested materiality.”
     “Core materiality” has been introduced as an umbrella term for three common material matters – greenhouse gas emissions, labor practices, and business ethics.  The term was coined as a reflection of the nearly universal materiality of these topics across varied industries.  Each represents one component of ESG – environmental (GHG), social (labor), and governance (ethics).
     “Extended materiality” considers impacts on portions of the value chain outside the assessor’s control.  Understanding upstream (i.e. supply chain) and downstream (i.e. marketplace) impacts better informs one’s own materiality assessments.
     “Dynamic materiality” is the term used to describe the changeable nature of materiality over time.  It is the reason that materiality assessments should be repeated periodically.  Materiality is in flux; previous assessments may no longer be valid.
 
Materiality Assessment
     Conducting a materiality assessment is a structured exercise involving a variety of people throughout an organization.  Descriptions of the process vary, but there is a high degree of agreement about the content and purpose; a seven-step process is presented here.  The steps are:  Prepare, Brainstorm, Categorize, Assess, Plot, Validate, and Publish.  A description of each follows.
 
Prepare.  Preparation for a materiality assessment is similar in many ways to that for various other types of projects.  Much of the effort required involves defining the assessment to be conducted.  Definition includes, but need not be limited to:
  • purpose of assessment – e.g. annual report to shareholders, strategy session, etc.
  • scope of assessment – financial, ESG, or both
  • assessment boundaries – e.g. core or extended materiality
  • team members, roles, and responsibilities
  • stakeholders – internal (directors, employees, unions, etc.) and external (suppliers, customers, investors, neighbors, advocacy groups, etc.)
  • process to be used, including decision-making guidelines
  • threshold values and other limits to materiality (e.g. social, environmental, economic impacts)
  • axis scales and titles and format of resulting matrix.
 
Brainstorm.  This step can be conducted in a brainstorming session as typically described, or less literally, as a period of information gathering.  All available sources of information should be considered to compile a list of potentially material topics.  This can include existing mechanisms of communication, such as website inquiries, sales and marketing interactions, shareholder calls, customer service call centers, or other established channels.  Surveys, questionnaires, or other information-gathering techniques can also be used specifically to support a materiality assessment.  The methods and tools used should be identified in the assessment process definition.
     Topics that trends suggest may be material in the future should also be captured, even if they do not currently meet the criteria.  Doing so will facilitate future assessments, as previous assessments serve as a key source of potential topics.  Inexperienced members of an assessment team may be unsure where to look for potential materiality.  Fortunately, tools are available to assist in this research (see the References below for examples).  These resources are only aids to the materiality search; their suggestions should not be interpreted as universal or comprehensive.
 
Categorize.  Arrange potential topics in groups that facilitate further research and assessment.  Groups could reflect the department, region, or other division to which each topic is most relevant.  If a different set of categories is a better fit with the team and organization structure, define the preferred classification scheme in the assessment process to ensure all team members view the information through the same lens.  See Vol. VII:  Affinity Diagram (8Feb2023) for additional guidance.
 
Assess.  Evaluate each topic on the dimensions of Impact on Business and Importance to Stakeholders according to the scales and scope defined during preparation.  Conduct additional research, if required, to quantify the impacts as accurately as possible.  It is imperative that each topic be assessed consistently in order for the materiality matrix to accurately represent the state of the business.
     The International Integrated Reporting Council (IIRC) guidelines for conducting an assessment include several perspectives from which each topic should be viewed and other factors that may influence the magnitude of impacts.  Assessment teams are advised to consider both quantitative and qualitative factors.  Quantitative factors may be direct measures of financial impact, but could also be represented by percentage changes in sales, yield, or other performance metric.  Qualitative factors are those that “affect the organization’s social and legal licence [sic] to operate,” including reputation and public perception.  These can be affected by the discovery of fraud, excessive pollution, workplace fatalities or illness, or other violations of “social contracts.”
     Both the area and time frame of impacts should be considered.  The area of an impact refers to it being internal or external to the organization.  Internal impacts include matters involving the continuity of operations and other disruptions that affect the organization directly.  External impacts include matters that affect stakeholders who then exert pressure on the organization in various ways.  These may include reputational damage, higher cost of capital, or the availability of required resources.
     An impact may have a short-, medium-, or long-term effect on an organization.  Short-term impacts are immediate and usually recoverable, such as an accident or spill.  The definitions of medium- and long-term vary among industries, but “average” or typical time frames used are 3 – 5 years and 5+ years, respectively.
     Medium-term impacts may include resource depletion, contract or license expiration, or other foreseeable change in an organization’s operating environment.  Long-term impacts are often associated with technology development and related regulations, such as renewable energy, electrification of transportation, and artificial intelligence.  The longer the time horizon, the more difficult it is to predict the nature and magnitude of the impact that will be experienced and, therefore, stakeholders’ perceptions.
     The IIRC’s recommended perspectives are:
  • Financial – expressed in monetary terms or financial ratios, such as liquidity or gross margin.
  • Operational – production volume and yield, market share and customer retention, etc.
  • Strategic – “high-level aspirations” such as market leadership, impeccable safety performance record, product development plans, etc.
  • Reputational – evaluation of incidents and the organization’s responses to them:  Were the events foreseeable, preventable?  Were the events caused by negligence or incompetence?  Were the responses timely, appropriate, and effective?  Was the organization forthcoming and transparent regarding causes, responsibility, and recovery plans?
  • Regulatory – an organization’s record of compliance and ability to comply with foreseeable future regulations.
     Viewing an organization from each of the perspectives described reveals potential impacts with financial and non-financial, or direct and indirect, effects.  There is significant overlap in the perspectives, particularly financial, that can be beneficial to an assessment.  Exploration from one perspective can quite naturally lead to taking another perspective; a more thorough and well-reasoned assessment often results.  The table in Exhibit 4 presents an example summary of the factors affecting mine safety.
     Compare each topic’s assessment to defined threshold values to determine which will be included in the materiality matrix and related reports.  If thresholds have been established by Enterprise Risk Management, they should also be applied to the materiality assessment.
     Various thresholds can be defined, both financial and non-financial.  The IIRC references several:
  • “Monetary amount” – Carbon Collective suggests income thresholds equal to 5% of pre-tax profit or 0.5% of revenue.  Alternatively, thresholds equal to 1% of total equity or 0.5% of total assets are suggested.  Appropriate monetary thresholds will vary from one organization to another, based on an array of factors.
  • “Operational effect” – lost or interrupted production; e.g. 5% of planned volume cannot be delivered on schedule.
  • Strategic – effect on organization’s ability to follow strategic plan; e.g. project schedules delayed more than 60 days.
  • Regulatory – the point at which compliance costs exceed the organization’s ability or willingness to continue operations; e.g. 15% increase over current year.
  • Reputational – could be represented by several measures, such as Customer Satisfaction Index, Net Promoter Score, investor confidence surveys, social media research, etc.; e.g. 20% negative response on primary social media platform.
     Though presented here, in the Assess step, in the context of their application, selection of thresholds requires significant attention during the Prepare step as the assessment process is defined.  The selection of team members and the selection of materiality thresholds are intricately linked:  a wide range of experience is needed on the team to choose appropriate thresholds as well as to evaluate topics with respect to the several factors that determine whether those thresholds have been exceeded.  A simple sketch of monetary-threshold screening follows.
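     The following Python sketch applies the monetary thresholds suggested above to a short list of topics.  The company financials, the topic list, and the estimated impacts are hypothetical, and combining the profit- and revenue-based rules with max() is simply one reasonable choice for illustration, not a prescribed method.

# Monetary materiality screen (Python).  Thresholds per the suggestions above:
# 5% of pre-tax profit or 0.5% of revenue; all financial figures are invented.
PRE_TAX_PROFIT = 40_000_000     # hypothetical pre-tax profit ($)
REVENUE = 600_000_000           # hypothetical annual revenue ($)

threshold = max(0.05 * PRE_TAX_PROFIT, 0.005 * REVENUE)   # $3,000,000 here

topics = {                      # hypothetical estimated financial impacts ($)
    "GHG emissions (carbon-pricing exposure)": 4_500_000,
    "Packaging redesign": 750_000,
    "Supplier labor-practice audit findings": 3_200_000,
}

for topic, impact in topics.items():
    status = "material" if impact >= threshold else "below threshold"
    print(f"{topic}: ${impact:,} -> {status}")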
 
Plot.  Create a pictorial record of the assessment in the format(s) defined during preparation.  The number of material topics, range of impacts, documentation standards, and other organizational norms may influence the format of the materiality matrix.  In the example shown in Exhibit 5, UPS differentiates between impact areas and trends that have significant influence on the business and, therefore, must be watched closely.  As shown in Exhibit 6, Unilever has chosen to identify five categories that cover a range of ESG topics.  Note that both present assessment results on a qualitative scale only; businesses are loath to publicly divulge financial information, lest competitors gain an advantage.  Unilever has even excluded items of “low” materiality – those that did not exceed a defined threshold – from the matrix.
     Multiple matrices can be created for different purposes, such as one for a financial report, one for a sustainability report, and a composite matrix for a shareholder report.  Exhibit 7 provides an example of separate matrices for reporting and strategy decisions.  Combining them results in the composite matrix shown in Exhibit 8.  If each marker on the graph were identified by a label, as is the “GHG Emissions” example, the matrix would be cluttered and difficult to read.
     To prevent visual overload, an alternative format, such as that shown in Exhibit 9, could be used.  In this example, each topic is identified by a number and its context – financial or sustainability – by the color of the marker.  An expanded legend or accompanying table (not shown) identifies each topic.  The information contained in an expanded legend, most often, is a short name (e.g. “GHG Emissions” in the previous example), while an accompanying document can contain detailed descriptions of the company’s investments, strategy, and other plans.  A public presentation is likely to contain the former, while the latter would be prepared for a meeting of executives or directors.
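     For readers who prefer to generate such a chart programmatically, the following Python/matplotlib sketch produces a numbered-marker matrix in the spirit of the alternative format described above.  The topic names, scores, and color coding are placeholders; an organization would substitute its own assessment results and expanded legend.

# Materiality matrix sketch (Python + matplotlib).  Scores are hypothetical,
# rated 0-10 on the two axes used throughout this post.
import matplotlib.pyplot as plt

topics = [   # (label, impact on business, importance to stakeholders, context)
    ("GHG emissions", 8.5, 9.0, "sustainability"),
    ("Data privacy", 7.0, 8.0, "financial"),
    ("Community engagement", 4.0, 6.5, "sustainability"),
    ("Currency exposure", 6.0, 3.5, "financial"),
]
colors = {"financial": "tab:blue", "sustainability": "tab:green"}

fig, ax = plt.subplots()
for i, (label, x, y, context) in enumerate(topics, start=1):
    ax.scatter(x, y, color=colors[context], s=120)               # context shown by color
    ax.annotate(str(i), (x, y), ha="center", va="center",
                color="white", fontsize=8)                        # topic shown by number

ax.set_xlabel("Impact on Business")
ax.set_ylabel("Importance to Stakeholders")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_title("Materiality Matrix (hypothetical data)")
plt.show()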
Validate.  Engage both internal and external stakeholders to assess the validity of the materiality matrix.  This step can be as simple as an informal “gut check,” where stakeholders opine on the absolute and relative ratings of material topics, accepting the matrix if it “feels right.”
     More-sophisticated evaluations involve comparisons of the assessment team’s ratings with those based on independently-acquired data.  The team may be challenged to defend its ratings by presenting supporting data and demonstrating the assessment process.  The objective of such challenges is to evaluate and confirm the strength of evidence and, thus, justify a topic’s position in the matrix.
     When additional data and critical review require it, adjustments are made to ratings and the matrix is updated.  Upon completion of the review and validation to stakeholders’ satisfaction, the materiality matrix, accompanying document, and report are finalized.
 
Publish.  The level of detail included in published reports varies, depending on the intended audience.  As mentioned previously, public disclosures may be limited to an overview, while internal management documents contain far more information, in both breadth and depth.  Typical components of a materiality assessment report include:
  • description of the assessment process and decision-making rules
  • results of the assessment (i.e. the matrix)
  • discussion of priorities and plans derived from or influenced by the assessment
  • the “shelf life” of the assessment (i.e. when it will be repeated).
Any detail relevant to the definition of the assessment, created in the preparation step, is a candidate for inclusion in a final report.  Information that clarifies the logical path to the ratings and conclusions should be included, while extraneous, distracting, or confusing details can be omitted.  Reactions to the report inform the team when its valuations of information are misaligned with those of various audience segments.
 
Materiality and Strategy
     “Strategy” is a very broad term, often clarified by a modifier such as “operations,” “marketing,” or “investment.”  Each of these, and more, can be influenced by a materiality assessment.  Examples of decisions that may be affected include:
Operations
  • Facility location, size, and construction
  • Modes of transportation
Marketing
  • Messaging compatible with expressed interests of consumers
  • Product development to satisfy changing preferences
Investment
  • Transitioning from disfavored industries, funds, etc. to those stakeholders deem worthy
  • Accelerating adoption of “green” technologies
     Strategy development, broadly speaking, consists of three key components – the Three Is:  impact, importance, and influence.  The first two, impact and importance, are addressed by the materiality assessment and can be read directly from the materiality matrix.
     The third component, influence, refers to an organization’s ability to affect the impact of a material topic on its stakeholders.  If an organization lacks capability, its strategy may focus on risk management, development of capabilities needed to reduce the impact, technology advancement that modifies the materiality landscape, or other method of compensation.
     The key takeaway is this:  even a high-impact topic that is highly salient to stakeholders (i.e. upper right of the materiality matrix) may not be a high priority; the lack of influence simply renders effort futile.  Presenting this accurately and transparently is crucial to maintaining stakeholder trust and support.
     Another facet of influence is an organization’s ability to affect stakeholders’ perceptions of a topic.  If stakeholders’ assessments of materiality are based on faulty research, corrupted data, etc., correcting the record is perfectly noble.  However, the potential for nefarious use of influence also exists.  For example, ethically-challenged individuals may choose to inappropriately downplay a material topic, mislead stakeholders, or divert attention from a management failure.  Even when used righteously, the practice may be considered manipulative, fostering skepticism and resentment.  It is mentioned here because the presentation would be remiss without it; its inclusion is intended to serve as a strong warning.  Influence should be used in this way rarely and with extreme caution.
 

     The unassuming appearance of a materiality matrix belies the intensity of research and analysis required to make it useful.  It also understates its utility as a strategy-development and communication tool.  The uninitiated may pay it little attention, but for those who see Superman in a newsroom, the insight it can provide is enormous.
           
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I: An Introduction to Business Mapping (25Sep2019).
 
References
[Link] “How to make your materiality assessment worth the effort.”  Mia Overall.  Greenbiz; August 15, 2017.
[Link] “The Strategic Value of ESG Materiality Assessments.”  Conservice ESG.
[Link] “Materiality Assessments in 4 Simple Steps.”  Jason Dea.  Intelex; September 2, 2015.
[Link] Materiality Tracker.
[Link] “Sustainability Materiality Matrices Explained.”  NYU Stern Center for Sustainable Business; May 2019.
[Link] “Practitioners' Guide to Embedding Sustainability.”  Chisara Ehiemere and Tensie Whelan.  NYU Stern Center for Sustainable Business; March 13, 2023.
[Link] “The essentials of materiality assessment.”  KPMG International, 2014.
[Link] “Dynamic, Nested and Core materialities - Materiality Madness?”  Madhavan Nampoothiri.  Nord ESG; July 25, 2022.
[Link] “From 0 to Double – How to conduct a Double Materiality Assessment.”  Sebastian Dürr.  Nord ESG; August 2, 2022.
[Link] “Dynamic Materiality: Measuring What Matters.”  Thomas Kuh, Andre Shepley, Greg Bala, and Michael Flowers.  Truvalue Labs, 2019.
[Link] “The materiality madness: why definitions matter.”  Global Reporting Initiative; February 22, 2022.
[Link] “Embracing the New Age of Materiality: Harnessing the Pace of Change in ESG.”  Maha Eltobgy and Katherine Brown.  World Economic Forum; March 2020.
[Link] “Materiality analysis and its importance in CSR reporting.”  Altan Dayankac.  DQS Global; November 3, 2022.
[Link] “Materiality Concept.”  Brooke Tomasetti.  Carbon Collective; March 8, 2023.
[Link] “Materiality:  Background Paper for Integrated Reporting.”  International Integrated Reporting Council; March 2013.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Commercial Cartography – Vol. X:  Work Balance Chart]]>Wed, 05 Apr 2023 04:30:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-x-work-balance-chart     The work balance chart is a critical component of a line balancing effort.  It is both the graphical representation of the allocation of task time among operators, equipment, and transfers in a manufacturing or service process and a tool used to achieve an equal distribution.
     Like other tools discussed in “The Third Degree,” a work balance chart may be referenced by other names in the myriad resources available.  It is often called an operator balance chart, a valid moniker if only manual tasks are considered.  It is also known as a Yamazumi Board.  “Yamazumi” is Japanese for “stack up;” this term immediately makes sense when an example chart is seen, but requires an explanation to every non-Japanese speaker one encounters.  Throughout the following presentation, “work balance chart,” or “WBC,” is used to refer to this tool and visual aid.  This term is the most intuitive and characterizes the tool’s versatility in analyzing various forms of “work.”
     A work balance chart can be used to streamline manufacturing or service operations.  Any repetitive process that involves manual operations, automation, and transfers of work products between operators or equipment can benefit from work balance analysis.  Applications in manufacturing are most common; service providers should take heed of the potential competitive advantage such an analysis could bring to light.
 
Assumptions
     To simplify the initial presentation, preparation for and construction of a work balance chart is subject to several assumptions, including:
  • Activity times are known and constant.
  • Takt time is known and constant.
  • Target cycle time equals takt time.
  • Process resources are dedicated (not shared with other processes).
  • Process uses one-piece flow or pull system (no batching or buffering).
  • A single product is manufactured or service performed (no mix).
  • Process is always fully functional (100% availability).
  • Process achieves 100% acceptable quality (no scrap or rework).
  • Existing process evolved with limited analysis.
The effects of these assumptions on work balance analysis are discussed in the “Adjustments for Reality” section.  Once the basic principles of work balance analysis are understood using the idealized process, assumptions that no longer hold should be removed.  The resultant WBC will more accurately represent operations, providing opportunities for more effective changes and higher performance.
 
Preparation
     Before an analysis can begin, its boundaries must be established.  Define the process completely, including its start point, end point, and each step between.  Creating a flow chart is a convenient way to document the process definition for reference during analysis.
     Though activity times are assumed to be known and constant, this information must still be collected and organized.  If not already available, a precedence diagram should be created; referencing preexisting diagrams is much more efficient than creating them “on the fly” or, worse, foregoing them (“running blind”).  Activity times can be added to reference diagrams to reduce the number of documents needed during analysis.  Summing the activity times for every step in the process yields the total cycle time (TCT).
     Determine the takt time – the maximum cycle time at which the process is capable of meeting customer demand.  Takt time is calculated by dividing the available work time in a period by the customer demand in that period.
For manufacturing operations, the shipping frequency provides a convenient period for takt time calculation; the available work time between shipments and the quantity to be shipped then define the takt time.
The period used for service operations is often a shift, a day, or a week, depending on the duration of the service analyzed.  It is possible, however, that another, “nonstandard” time period is appropriate; its selection is left to the judgment of the analysis team.
     To determine the optimum number of operators, or stations, for the idealized process, divide the total cycle time by the takt time and round up to the nearest integer.
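     For readers who want to see the arithmetic spelled out, the following Python sketch performs both calculations; the shift length and demand are assumed figures chosen only for illustration.

import math

# Assumed figures chosen for illustration only
available_work_time = 25_920      # seconds of available work time per shift (7.2 h)
customer_demand = 450             # units required per shift
total_cycle_time = 252            # seconds of work content per unit (sum of task times)

takt_time = available_work_time / customer_demand        # 57.6 s per unit
num_stations = math.ceil(total_cycle_time / takt_time)   # round up to the nearest integer

print(f"Takt time: {takt_time:.1f} s")          # 57.6 s
print(f"Stations required: {num_stations}")     # 5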
Constructing the WBC
     A work balance chart, like several other diagrams, can be constructed manually or digitally.  Many diagramming efforts benefit significantly from manual construction; speed of development, the cross-section of inputs, and size of the workspace (“canvas”) are typical advantages.  The nature of a WBC, however, often shifts the advantage to digital construction for experienced practitioners, as a description of both methods illuminates.
     Manual construction of a WBC is most valuable for training purposes; advantages in this context include:
  • Tactile engagement reinforces the concept that physical rearrangement may be required to balance a workload.
  • Manual manipulation allows rapid reconfiguration and visual confirmation of completeness (all components accounted for).
  • Physical models are less abstract than digital tools.
  • No computer skills are required.
  • Software may induce unintended limitations on the exploration and experimentation that solidify students’ understanding of concepts.
Materials needed to manually construct a WBC include paper, scissors, ruler, and writing instrument.  An alternative method involving hand-drawn scales on a whiteboard is too imprecise and will not be detailed here (further explanation is probably unnecessary, anyway).
     On a piece of paper, draw a graph with an appropriate time scale, chosen based on the takt time and task times, on the vertical axis.  Along the horizontal axis, place labels for each station or process in sequence from left to right.  For each task, cut a strip of paper to length in proportion to its duration; use the scale established on the graph.  To improve legibility and maintain the proper scale, the graph and task strips can be printed on a computer; these will look similar to the example in Exhibit 1.  Doing so reduces the time required, maintaining focus on the important aspects of the WBC; the ability to draw time scales accurately is the least relevant to a work balancing effort or training.
     On the blank graph, draw a horizontal line at the takt time or target cycle time (they may be different – more on this later).  Use a bright or contrasting color to ensure that the line is easily visible.  Place the task strips on the graph, “stacking” them above the corresponding station or process label.  An example of what this might look like is shown in Exhibit 2.  The example presents the current state of production that clearly exhibits a huge imbalance in the workload.
     To balance the workload among the available stations, rearrange the task strips, targeting equal total task times in each station.  Each task movement is subject to restrictions established by the precedence diagram for the process being analyzed.  The future state of production, with a balanced workload, may look like that presented in Exhibit 3.
     A task eligibility chart can also be created; in it, information contained in the precedence diagram or table is reorganized to be more easily applied directly to the work balance effort.  To see how an eligibility chart is created and utilized, consider the example precedence diagram and table in Exhibit 4 and the derived eligibility chart in Exhibit 5.  In this example, there are 12 tasks to be completed to meet a 57.6 s takt time (the presumed target cycle time).  The number of stations needed is determined by dividing the total task time by the takt time:  252 s/57.6 s = 4.375.  Rounding up, the work is to be balanced among five stations.
     For a task to be eligible for assignment to a station, it must meet all precedence requirements.  A commonly used “rule of thumb” is to assign the eligible task with the longest duration that will not cause the total station time to exceed the target cycle time.  In the example presented, the tasks are assigned in alphabetical order, but this need not be the case; a process with more parallel tasks will have more “mixing” of the task sequences.  The work balance chart in Exhibit 6 provides the graphical representation of the eligibility chart assignment information.
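     The “longest eligible task” rule of thumb also lends itself to a short algorithm.  The Python sketch below illustrates the greedy assignment logic; the task letters, durations, and precedence relationships are assumed values, not the data behind the exhibits.

# Assumed task data: duration in seconds and immediate predecessors
tasks = {
    "A": (40, []),
    "B": (30, ["A"]),
    "C": (25, ["A"]),
    "D": (20, ["B"]),
    "E": (35, ["B", "C"]),
    "F": (15, ["D", "E"]),
}
target_cycle_time = 57.6   # seconds

stations, assigned = [], set()
while len(assigned) < len(tasks):
    station, station_time = [], 0.0
    while True:
        # Eligible: all predecessors assigned and the task still fits in this station
        eligible = [t for t, (dur, preds) in tasks.items()
                    if t not in assigned
                    and all(p in assigned for p in preds)
                    and station_time + dur <= target_cycle_time]
        if not eligible:
            break
        # Rule of thumb: take the eligible task with the longest duration
        pick = max(eligible, key=lambda t: tasks[t][0])
        station.append(pick)
        station_time += tasks[pick][0]
        assigned.add(pick)
    if not station:
        # A single task exceeds the target cycle time; see "Additional Notes on Balancing"
        raise ValueError("Task duration exceeds target cycle time")
    stations.append((station, station_time))

for number, (contents, total) in enumerate(stations, start=1):
    print(f"Station {number}: {contents}  total = {total:.1f} s")

Note that the greedy heuristic does not guarantee the minimum number of stations; it simply produces a feasible assignment that respects precedence and the target cycle time.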
     The WBC in Exhibit 6 was generated in a spreadsheet program; compare it to Exhibit 3.  The output of the manual (hybrid, really) process depicted in Exhibit 3 is functional, but imprecise and aesthetically unsatisfying for presentation.  This is true despite the use of a computer to generate the graph and task strips.  With comparable effort, a spreadsheet template can be created to generate an aesthetically pleasing WBC that automatically adapts to task rearrangement.  In subsequent balancing efforts, use of the template is much more efficient than the manual process.  For this reason, experienced practitioners are encouraged to construct WBCs digitally.  Once the concepts of proper execution are well-understood, physical manipulation no longer adds sufficient value to justify an inefficient process.
     A spreadsheet template can also be created to generate a horizontally “stacked” WBC, such as that shown in Exhibit 7 for the eligibility example.  Exhibit 6 and Exhibit 7 present the same information in slightly different formats and with different connotations.  The vertical “stacks” of Exhibit 6 may evoke the concept of “piling on” or putting an operator under increasing load as additional tasks are assigned to a station.  The horizontal bars of Exhibit 7 tend to be less evocative, portraying the inevitable, and mostly unobjectionable, passage of time.  The notions of “workload” and “timeline,” though equivalent in this context, can elicit very different reactions from varying audiences.  Both formats provide accurate, acceptable presentations of the work balance; the choice between them is made for aesthetic reasons.
Adjustments for Reality
     The examples in the previous section allude to use of work balance charts to improve an existing process (see Exhibits 2 and 3) and for process planning (see Exhibits 4, 5, and 6).  The information that can be known and that which must be estimated differs between these two applications.  For example, data from a time study can be used to balance an existing process, but task times must be estimated in a preliminary process plan (i.e. prior to building equipment).
     Once performance data is available for a new process, the workload may require rebalancing.  This is often done assuming a constant activity (task) time; the average of recorded cycles is typically used.  However, this may not be the most effective choice; an “accordion effect” of varying task durations may induce erratic fluctuations in the workflow.
     These fluctuations can be accommodated in the target cycle time.  Activity times are typically normally distributed; this can be verified in the performance data prior to implementing the following strategy.  Consider a hypothetical task for which time study data reveal an average duration of 30 s and standard deviation of 4 s [μ = 30, σ = 4].  To smooth the workflow, the task time “standard” is set to encompass 90% of cycles.  As the normal distribution for this example, Exhibit 8, shows, this task duration is set at 35.1 s [P(X ≤ 35.1) = 0.90; z = 1.282].
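     The 35.1 s figure can be reproduced with the inverse of the normal cumulative distribution; a minimal check using only the Python standard library, with the mean and standard deviation of the hypothetical task above:

from statistics import NormalDist

mean, std_dev = 30.0, 4.0        # time study results for the hypothetical task (s)
coverage = 0.90                  # fraction of cycles the "standard" should encompass

task_standard = NormalDist(mu=mean, sigma=std_dev).inv_cdf(coverage)
print(f"Task time standard: {task_standard:.1f} s")   # ~35.1 s (z ≈ 1.282)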
     Variation in activity time is unavoidable for manual tasks; thus wait time (waste) will exist in a closely-coupled process.  Monitoring and evaluation of productivity and operators’ frustration with the system are required to choose an appropriate target cycle time.  Designing a system to operate at the average task time ensures that only 50% of cycles will meet the design criterion!  The system output will meet design intent only when the average task time is achieved at every step of the process.
     This tension is relieved if physical buffers are built into the system or output from stations is batched; these practices have an equivalent effect.  Variation in task time then no longer propagates directly to downstream operations; the average task time becomes the most relevant metric for systems of this type.
 
     Takt time may also vary; this is the nature of seasonal demand, for example.  Periods of reduced demand can be accommodated in a few ways:
  • Produce at the average takt time (not possible for services) regardless of the current demand.  Carried inventory compensates for periods of production/demand mismatch.
  • Limit the operating schedule to match total output to demand (system designed for maximum demand).
  • Reduce the number of operators and rebalance the workload for the higher takt time.  A single process may use several WBCs to match output to varying demand.  Seasonal demand may be satisfied with “winter work balance,” “summer work balance,” and “spring/fall work balance” configurations.  Any number of WBCs can be created to manage fluctuating demand.
 
     Several of the assumptions upon which the WBC development presentation was based are interconnected; it is difficult to discuss one without invoking others.  The assumption that “target cycle time equals takt time” may be removed for many reasons, some of which were presented in other assumptions.  If resources – personnel, equipment, etc. – are shared with another process, the target cycle time may be reduced so that sufficient time is available to meet the demand for both processes.  This creates a situation similar to introducing product mix into a process, precluded from this discussion by another assumption.  This topic is best left to a future installment; adequate exploration is beyond the scope of this presentation.
 
     The assumptions of “100% process availability” and “100% acceptable quality,” like that of constant activity times, were made to avoid confusing those new to line balancing.  They must now be adjusted, however, to develop realistic expectations of process performance.
     Quality and availability are two legs of the OEE (Overall Equipment Effectiveness) “stool.”  Achieving 100% performance in both measures for an extended period of time is unlikely for a system of any sophistication.  Therefore, the target cycle time must be adjusted to accommodate the actual or anticipated performance of the system.
     The reliability of a system affects its availability and directly influences the numerator of the takt time calculation.  Output of unacceptable quality is accounted for, indirectly, in the denominator by effectively increasing the demand (an additional unit must be produced to replace each faulty unit); the takt time calculation is modified accordingly.
     The third leg of the OEE stool is productivity, to which the task time variation discussion and Exhibit 8 allude.  An example occurrence of these three losses is depicted in Exhibit 9.  Adjusting target cycle time based on OEE is an alternative method (“shortcut”) of compensating for these losses when they are known or can be estimated with reasonable accuracy.  For this method, the target cycle time is calculated from the takt time and the OEE (a sketch follows below).
These calculations are, once again, based on the assumption that resources are dedicated to a single process.  For process planning, the target cycle time can be calculated using a “world-class” OEE of 85%, an industry average, or other reasonable expectation.
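     As a rough numeric sketch of the adjustments just described – availability shrinking the usable time, scrap inflating the effective demand, and the OEE “shortcut” scaling the takt time – consider the following Python snippet.  All values are assumptions for illustration; the exact form of the calculation should be adapted to the process at hand.

# Assumed inputs chosen for illustration only
available_work_time = 25_920    # s of scheduled work time per shift
customer_demand = 450           # units per shift
availability = 0.95             # fraction of scheduled time the process can actually run
quality_yield = 0.98            # fraction of output that is acceptable

# Reliability losses shrink the numerator; scrap effectively inflates the denominator
adjusted_takt = (available_work_time * availability) / (customer_demand / quality_yield)
print(f"Adjusted takt time: {adjusted_takt:.1f} s")       # ~53.6 s vs. 57.6 s unadjusted

# OEE "shortcut": scale the takt time by overall equipment effectiveness
oee = 0.85                                  # e.g. a "world-class" planning assumption
takt_time = available_work_time / customer_demand
target_cycle_time = takt_time * oee
print(f"OEE-adjusted target cycle time: {target_cycle_time:.1f} s")   # ~49.0 s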
     The final assumption stated involves continuous improvement (CI) efforts.  Processes evolve to accommodate changing customer requirements, material availability, and other influences on production.  Many times, process changes are implemented with incomplete analysis, whether due to urgency or oversight, resulting in a system that is inefficient and unbalanced.  Learning curves may also affect tasks differently; experience may improve performance in some tasks more than others.  Including this assumption in the discussion serves as a reminder that line balancing is a CI effort involving both efficiency and arrangement.
 
Additional Notes on Balancing
     If no satisfactory balance can be found without exceeding the target cycle time, there are several approaches available, including:
  • Increase operating time, adjusting takt time and target cycle time to an achievable production rate.
  • Increase the number of stations, lowering total station time below the target.
  • Implement parallel processing of the longest-duration tasks by adding equipment and/or operators.
  • Conduct a detailed time & motion study to discover opportunities for increased productivity (e.g. left-hand/right-hand operations, improved sequencing).
  • Increase operating speed of equipment via refurbishment, replacement, or new technology.
  • Balance all stations except the last.  The excess (waiting) time in the final station can be used to restock materials, perform a task in another process, or other creative use.  It also serves as incentive to further improve the system for efficient, balanced operation.  Alternatively, it represents capacity for additional content, such as new product features or service components.
  • Increase the sale price to lower demand to an acceptable rate while maintaining profitability.
 
     The digitally-generated WBC examples (Exhibits 6 and 7) were created with “traditional” spreadsheet data presentation and charting tools.  Pivot tables can also be used to organize data and generate charts; however, their use requires additional skills and manual chart updates.  For those sufficiently skilled and comfortable with pivot tables, they are a viable option, though the advanced users who benefit from them probably represent a small fraction of practitioners.
 
Notes on Simulation
     Simulation software can be used to facilitate line balancing and other operational assessments.  Work balancing projects like those described here will usually not benefit greatly from the additional effort that simulation requires; highly sophisticated systems may warrant it, however.  A complex product mix, highly variable task durations, complex maintenance schedules, and unpredictable demand may complicate the analysis to a sufficient degree to justify the use of simulation software.
     A spreadsheet program can also be useful for “what if” type experimentation and is sufficient for most line balancing projects.  Monte Carlo simulation, distribution analysis, and other simple functions can also be performed in a spreadsheet.
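     As an example of the kind of “what if” experimentation a spreadsheet (or a few lines of code) supports, the following Monte Carlo sketch estimates how often a closely-coupled two-station line exceeds a 57.6 s target cycle time when task times vary; the station means and standard deviations are assumed values.

import random

random.seed(2023)                        # reproducible illustration
target_cycle_time = 57.6                 # s
stations = [(55.0, 3.0), (54.0, 4.0)]    # assumed (mean, std dev) of station work content, s

trials = 10_000
late = 0
for _ in range(trials):
    # With no buffers, the slowest station paces each cycle of a closely-coupled line
    cycle = max(random.gauss(mu, sigma) for mu, sigma in stations)
    if cycle > target_cycle_time:
        late += 1

print(f"Cycles exceeding the target: {late / trials:.1%}")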
 
 
     Approaching the limits of production capability requires the most complete and accurate information possible.  It is imperative to account for variability in human task performance, equipment reliability, quality attainment, predictability of demand, and other factors in process planning and development.  Increased efficiency and improved work balance are mutually reinforcing – each supports the other – and should be pursued in conjunction whenever feasible.
 
     For additional guidance or assistance with line balancing, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I: An Introduction to Business Mapping (25Sep2019).
 
References
[Link] Line Balancing Series.  Christoph Roser. All About Lean, January 26, 2016.
[Link] “The Balancing Act:  An Example of Line Balancing.”  Brian Harrington.  Simul8.
[Link] “Operator Balance Chart.”  Lean Enterprise Institute.
[Link] “Understanding the Yamazumi Chart.”  OpEx Learning Team; July 19, 2018.
[Link] “What Is Line Balancing & How To Achieve It.”  Tulip.
[Link] “Lean Line Balancing in the IT Sector.”  Rupesh Lochan.  March 9, 2011; iSixSigma.
[Link] Normal Distribution Generator.  Matt Bognar.  University of Iowa, 2021.
[Link] Normal Distributions.
[Link] The New Lean Pocket Guide XL.  Don Tapping; MCS Media, Inc., 2006.
[Link] The Lean 3P Advantage.  Allan R. Coletta; CRC Press, 2012.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Commercial Cartography – Vol. IX:  Precedence Diagram]]>Wed, 22 Mar 2023 04:00:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-ix-precedence-diagram     A precedence diagram is a building block for more advanced techniques in operations and project management.  Precedence diagrams are used as inputs to PERT and Gantt charts, line balancing, and Critical Path Method (topics of future installments of “The Third Degree.”)
     Many resources discuss precedence diagramming as a component of the techniques mentioned above.  However, the fact that it can be used for each of these purposes, and others, warrants a separate treatment of the topic.  Separate treatment is also intended to encourage reuse, increasing the value of each diagram created.
     A discussion of the nature of task dependencies is a useful preface to one on precedence diagramming.  While it is conceivable that a useful precedence diagram could be created without a deep understanding of dependencies, the result would likely be suboptimal.  If dependencies are not well-understood, contingency planning and decision-making are much less effective.  Task, or activity, dependencies are defined by two binary characteristics:  mandatory vs. discretionary and external vs. internal.
     A mandatory task dependency often involves a physical limitation; i.e. it is not possible to perform Task B until Task A is complete.  For example, it is not possible to frame a house until the foundation has been poured, or to roof it before it has been framed.  It may also involve a legal, regulatory, or contractual obligation; i.e. the “tasker” – the individual, group, or organization performing the task – is bound by law or agreement to adhere to a specified task sequence.  Continuing the house-building example, inspections of electrical and plumbing installations must be complete, to verify compliance with building codes, before sheetrock can be hung.  A contractual mandatory dependency is created when the bank financing the construction requires specified milestones to be reached and approved before subsequent funds are released.
     A discretionary dependency is a procedural choice defined by the tasker or customer.  It often represents an accepted best practice, such as the most efficient use of resources to complete a given series of tasks; it could also simply be a preference.  A builder may prefer to paint all rooms before hanging any wallpaper, but a shortage of paint for the living room need not delay wallpapering of the dining room.  Discretionary dependencies reflect desires, but flexibility to respond to changing circumstances is retained.
     An external dependency exists where a “non-project” milestone must be reached before a project activity can begin.  In the example above, electrical and plumbing inspections are external activities that must be complete before drywalling (internal activity) can begin.
     An internal dependency involves only project work and milestones.  Stated another way, internal dependencies are relationships between activities within the tasker’s control.  As such, activities with internal dependencies may be subject to expediting efforts.
     Possible combinations of these characteristics define four dependency types:
  • mandatory external – the tasker has little to no influence on the achievement of the required milestone.  The electrical and plumbing inspection required prior to drywalling, as mentioned previously, is a mandatory external dependency.
  • mandatory internal – the tasker cannot modify the requirement, but can influence when the milestone is reached.  Packaging of a product prior to shipping is a mandatory internal dependency.
  • discretionary external – the tasker can choose to modify a task sequence or milestone requirement despite the non-project work involved.  For example, a third-party assessment should be completed before project activities begin.  However, failure to reach the non-project milestone (completed assessment) does not prevent commencement of project work.  Nothing prevents a prospective buyer from making a real estate purchase offer prior to receiving an appraisal of the property, though it is clearly advisable to wait for it in most situations.
  • discretionary internal – the tasker can exert the greatest influence on the activity sequence and milestone achievement by modifying how and when project activity is conducted.  A product development plan may call for finite element analysis (FEA) to be complete before a prototype is built.  If multiple prototypes are planned, the first may be built before the design is finalized to expedite activities less dependent on the FEA results, such as aesthetic assessments or packaging trials.
An obligatory 2 x 2 matrix summarizing the four types of task dependency is provided in Exhibit 1.
     Understanding dependency aids decision-making when adapting project execution to changing circumstances.  To establish precedence, a temporal component must be added; the temporal information is the key component of a precedence diagram.  Precedence describes the logical relationship between predecessor and successor activities in a project or process.  There are four possibilities:
  • Finish-to-Start (FS) – predecessor activity must be complete to begin the successor activity.  Examples in the dependency discussion above were described as FS constraints; a sequential series of tasks is common and intuitive.
  • Finish-to-Finish (FF) – predecessor activity must be complete to complete the successor activity.  Drywall spackling must be complete to finish painting the walls.
  • Start-to-Start (SS) – predecessor activity must begin to begin the successor activity.  Mortar mixing must begin in order for bricklaying to commence.  Both Lights! and Camera! must begin before the Action!! can commence.
  • Start-to-Finish (SF) – predecessor activity must begin to complete the successor activity; an uncommon precedence relationship.  New mobile phone service must be activated before the old service can be disconnected to avoid interruption of service.
     A graphical representation of each precedence relationship is provided in Exhibit 2.  Exhibit 3 demonstrates how activities can be shown to be subject to multiple precedence relationships.
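     When precedence information is managed digitally, these relationship types can be captured in a simple data structure.  The Python sketch below is illustrative only; the lag field is an added assumption, and the activity pairs are borrowed from the examples above.

from dataclasses import dataclass

@dataclass
class PrecedenceRelation:
    predecessor: str
    successor: str
    relation: str = "FS"    # FS, FF, SS, or SF; FS is the common default
    lag: float = 0.0        # optional delay between the linked events (assumed field)

relations = [
    PrecedenceRelation("Pour foundation", "Frame house"),                    # FS (default)
    PrecedenceRelation("Spackle drywall", "Paint walls", relation="FF"),
    PrecedenceRelation("Mix mortar", "Lay bricks", relation="SS"),
    PrecedenceRelation("Activate new phone service", "Disconnect old service",
                       relation="SF"),
]

for r in relations:
    print(f"{r.predecessor} -> {r.successor} ({r.relation}, lag {r.lag})")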
     Precedence diagrams can be drawn in two pictorial formats – Activity on Node (AON) and Activity on Arrow (AOA); both are types of network diagram.  In an AON diagram, the task or activity descriptions are placed at the nodes and the arrows represent precedence relationships.  The example in Exhibit 4 has no precedence identifiers on the arrows; therefore, it is assumed that only FS constraints exist in this task sequence.
     In an AOA diagram, activity descriptions are placed on the arrows and the nodes serve as milestones.  As shown in Exhibit 5, an AOA diagram may require “dummy” activities to represent additional precedence relationships.  In this example, the dummy activity is added to show that both Activity A and Activity B are predecessors of Activity C.  Again, a purely sequential execution is assumed because no precedence notation has been used; FS is the default.
     A third method of presenting precedence relationships is in tabular format.  An example precedence table, created by a tennis tournament planner, is shown in Exhibit 6.  Each activity is described and assigned a code to simplify references to it.  Predecessor activities are then identified by assigned codes.  A pictorial diagram can be generated from the information contained in this table in either AON format (Exhibit 7) or AOA format (Exhibit 8).  The reverse is also true; precedence information can be transferred from a pictorial diagram to a table.
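     A precedence table translates naturally into a small dictionary from which a valid execution order can be derived.  The Python sketch below uses hypothetical activity codes rather than the tennis-tournament data of Exhibit 6.

# Activity codes mapped to their immediate predecessors (assumed example data)
precedence_table = {
    "A": [],
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],
    "E": ["D"],
}

# Derive one valid sequence: repeatedly select an activity whose predecessors are complete
sequence, completed = [], set()
while len(completed) < len(precedence_table):
    ready = sorted(a for a, preds in precedence_table.items()
                   if a not in completed and all(p in completed for p in preds))
    if not ready:
        raise ValueError("Circular precedence detected")
    nxt = ready[0]                 # tie-break alphabetically
    sequence.append(nxt)
    completed.add(nxt)

print(" -> ".join(sequence))       # A -> B -> C -> D -> E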
     The choice of format(s) to use is usually a simple preference.  Many find the AON diagram intuitive and easy to use, while dummy activities may be more difficult to process rapidly.  An experienced practitioner may choose to forego a graphical diagram; it is a simple matter to enter the information in a spreadsheet, but graphical capabilities may be limited or cumbersome.  If one can process the information with sufficient ease in tabular format, a graphical diagram is unnecessary.
     The diagrams in Exhibit 7 and Exhibit 8 use the codes assigned in the table of Exhibit 6 rather than full activity descriptions.  The subscript number attached to each activity code is its estimated duration, found in the rightmost column of the table.  Durations are not required on precedence diagrams and are normally added only when advanced techniques, mentioned in the introduction, are employed.  The use of estimated durations will be discussed in future installments when these techniques are explored.
 
     At times in this presentation, constraint has been used as a synonym for precedence relationship.  This is a valid substitution, as precedence requirements create constraints on the execution of a task sequence, limiting flexibility available for the tasker to exploit.
     Some resources refer to precedence relationships, as defined here, as dependencies; the performance of one task is dependent on the performance of another.  If the context is understood, the overlapping terminology is not catastrophic.  Dependency and precedence are differentiated here, and the discussion included in this installment, because it seemed the best fit for the information.  Both are fundamental building blocks of the advanced techniques to be presented later.
 
     Precedence diagrams are used to document production and construction processes, in project management, and in event planning.  One could even be used to tame the chaos of a busy family’s daily life.  Managing soccer games, band practice, visits to the veterinarian, and PTA meetings may be less daunting if these activities are clearly sequenced in advance.
     Readers are encouraged to experiment with different presentation formats and practice identifying dependencies and precedence relationships.  Planning daily activities is a judicious use of a new tool.  Ideally, the first application of any technique is a low-risk endeavor, providing a comfortable learning curve, limited potential consequences, and a solid foundation on which to build skills with advanced techniques.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I: An Introduction to Business Mapping (25Sep2019).
 
References
[Link] PMBOK® Guide - Sixth Edition.  Project Management Institute, 2017.
[Link] “PDM – Precedence Diagramming Method [FS, FF, SS, SF] (+ Example).” Project-Management.info.
[Link] “Precedence Diagram Method (PDM).”  AcqNotes, January 1, 2023.
[Link] “Arrow diagramming method.”  Wikipedia, September 18, 2021.
[Link] Service Management, 8ed.  James A. Fitzsimmons, Mona J. Fitzsimmons, and Sanjeev K. Bordoloi.  McGraw-Hill/Irwin, 2014.
[Link] “Look at four ways to set precedence relationships in your schedule.”  tommochal.  TechRepublic, January 28, 2008.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Commercial Cartography – Vol. VIII:  Cause & Effect Diagrams]]>Wed, 08 Mar 2023 14:00:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-viii-cause-effect-diagrams     A cause & effect diagram is best conceptualized as a specialized application and extension of an affinity diagram.  Both types of diagram can be used for proactive (e.g. development planning) or reactive (e.g. problem-solving) purposes.  Both use brainstorming techniques to collect information that is sorted into related groups.  Where the two diverge is in the nature of relationships represented.
     An affinity diagram may present several types of relationships among pieces of information collected.  A cause & effect diagram, in contrast, is dedicated to a single relationship and its “direction,” namely, what is cause and what is effect.
     The diagrams presented herein may also be known by other names.  Some use a slight variation, “cause-effect diagram,” or “C-E diagram” for short.  Many know them as “fishbone diagrams” because a common presentation format resembles a fish skeleton.  Others refer to “Ishikawa diagrams” in recognition of Kaoru Ishikawa, the technique’s creator or popularizer, depending on the source cited.  In this discussion, “cause & effect diagram” and “C-E diagram” will be used interchangeably to refer to the output of the exercise described, C-E analysis.  Alternative formats, shown later, demonstrate the advantage of using a universal name for the diagrams.  This allows focus to remain on the content and purpose of the diagram without needless distraction by variations in the display format.
     Numerous variations in the content – particularly the group names or headers – are also possible.  To develop a foundational understanding of the process, a “traditional” C-E diagram development is the focus of the following presentation.  Information on common variations is included as unintrusively as possible.

Go Fish!
Manufacturing
     Cause & effect diagrams originated in manufacturing problem-solving, or troubleshooting; use in this context remains its most common.  The “traditional” set of cause categories – the affinity diagram headers – is known as “the 6Ms,” an alliterative mnemonic.  This set includes:
  • Man – or huMan, huwoMan…  Any huperson, really.  By definition, “man” refers to any human being or person.  A jocular jab at modern society’s abuse of the English language should cause no shock, nor concern; feel free to ignore it!
  • Machine – or equipMent, broadly defined.  Any mechanical or electronic device, tool, vehicle, etc. employed by a person, or operating autonomously, is included in the machine category.
  • Material – or Matter.  Any substance used in the creation of the product, whether a major component or a superficial additive, belongs in the material category.  For example, mold release and die lubricant are not components of the final product, but are critical to their respective manufacturing processes.
  • Method.  Any procedure, technique, or “trick” – whether or not it is delineated in process documentation – is a method for purposes of analysis.
  • Measurement.  This category typically references concerns about physical product attributes.  Process attributes may also require review; examples include temperature, gas flow rate, press force, operator process time, and innumerable others.
  • Mother Nature – or environMent.  A useful term for purposes of the mnemonic, it should not be taken too literally.  The environment of concern can be natural or controlled.  It may be on a large scale, such as an entire facility or even the outdoors; it can also be on a small scale, such as an individual room (e.g. laboratory) or inside a heat-treat furnace.
     A generic diagram, using the 6M categories, is shown in Exhibit 1.  The problem statement, or effect to be analyzed, is shown at the end of the center spine.  Branches, or bones, on either side of the spine are labelled with major cause categories.  Lines drawn from these branches are each labelled with a primary cause of the effect identified; secondary causes branch from primary causes, and so on.  Additional levels of “sub-causes” can be added if necessary, using a “5 Why” technique or similar approach to identify the most fundamental cause of the problem under investigation.  A 6M diagram example is shown in Exhibit 2.
     Construction of the diagram progresses following the pattern “[current branch label] is caused by [lower level branch label].”  For example, in Exhibit 2:  “Iron in Product is caused by Materials issues, including raw material issues such as water (H2O) from the city distribution system.”  A statement like this can be used to generate the branches in the diagram.
     To verify proper construction of the diagram, the process can be reversed, reading the diagram according to the following pattern:  “[lower level branch label] causes [next higher level branch label].”  Doing this, the example above becomes “the city’s distribution system contaminates (causes an issue) water (H2O), a raw material for our product (Materials issue) causing Iron in Product.”  The generic pattern is modified to translate abbreviated, truncated, or otherwise simplified notes into an intelligible statement.  The phrasing may be clunky, but it only needs to confirm that causal relationships are properly represented.  Statements like this can be refined for presentation purposes if deemed necessary or useful.
     The troubleshooting and problem-solving example is a “negative effect” diagram, where the goal is to eliminate the negative effect, or problem, identified.  Causes identified can be used to create or improve an FMEA and various controls.
     A “positive effect” C-E diagram can also be constructed by replacing the problem statement with a goal statement.  The diagram is then populated with information that can be used to attain the desired outcome.  Project managers, product development planners, customer service agents, and others can benefit from conducting this type of analysis.
     The preceding presentation describes the cause enumeration type of C-E analysis.  Other types of C-E analysis and diagrams are presented in the following sections.  Later, additional tips for construction and utilization of various diagrams are provided.
 
Service
     Where manufacturing has the 6Ms, service industries have the 4Ss:
  • Surroundings – similar to Mother Nature.  Here, the environment is evaluated for its impact on the customer experience, which may differ from a process-performance perspective.  The ease with which a customer can find and engage a service, their comfort level, and aesthetic factors are included in this category.
  • Suppliers – similar to Material.  The inputs to a service affect its delivery, including materials, equipment, and other services.
  • Systems – similar to Method.  Policies and procedures that define the structure of the service, and the tools required to deliver it consistently, are included here.  Examples include order-taking, communication, and verification.
  • Skills – similar to Man.  This category identifies the abilities required for successful delivery of a service.  It can be used to develop training plans and evaluation criteria.
     The bones of a service C-E diagram are shown in Exhibit 3 as a negative effect diagram.  A positive effect diagram is created by replacing “The Problem” with a service outcome to be achieved.  A specific characteristic, such as “wait time < 5 minutes,” is usually more productive than a more general, overarching goal, such as “customer delight.”  It may take an entire school of fish to achieve customer delight!
     An example 4S diagram is shown in Exhibit 4.  This diagram has a lot of fishhead (more on this later), but few bones.  Each cause shown should be explored further, expanding the diagram content accordingly.  The diagram’s construction should be double-checked, ensuring that the cause-effect relationships are presented properly.
 
Marketing
     The 4Ps are well-established and widely-known levers of marketing.  The 4Ps are:
  • Product – what is on offer.  This could be physical product, a service, or a digital token (e.g. cryptocurrency, NFT).
  • Price – cost of the product.  Discounts, rebates, and competitive comparisons are all considered in this category.
  • Place – where customers access the product.  Both physical and virtual worlds are considered here.  The position of a photo on a webpage and physical location of a product on a store shelf are analogous; they are chosen for visibility and ease of purchase.
  • Promotion – incentives and perception of product.  Advertising, branding, endorsements, sponsorships, and “influencers” are considered in this category.
     Similar to the previous types, a marketing C-E diagram can be used to deconstruct a market failure (negative effect) or to capitalize on a market opportunity (positive effect).  That is, both proactive and reactive uses are again possible.
 
Process
     A process analysis diagram extends the cause enumeration diagram, creating a hybrid.  It combines aspects of a process flow diagram (PFD) and a “traditional” fishbone diagram.  Potential causes of failure, or contributors to success, are identified for each process step.  Traditional cause categories, presented above, can be used, or process-specific categories can be defined.  The process steps themselves can also serve as the major cause categories (discussed further in the next section).  An example process C-E diagram is shown in Exhibit 5.  This structure incorporates an additional ‘level’ of analysis, narrowing the focus of development efforts for each contributor or cause.
Variations
     In addition to cause enumeration and process analysis diagrams, some authors (e.g. Kolarik) cite dispersion analysis as a third distinct diagramming effort.  Dispersion C-E diagrams present causes of product or process variation rather than failure.  However, a cause enumeration diagram with a problem statement referring to excessive or undesired variation achieves, essentially, the same result.  For this reason, dispersion analysis is undifferentiated from cause enumeration in this and many other discussions of C-E diagrams.
 
     The major cause categories presented were described as “traditional” for lack of a better descriptor.  Like other tools, C-E diagrams have evolved and mutated with time and experience.  The 6Ms, commonly associated with manufacturing, have been expanded to include such items as the following:
  • Maintenance – often included in the Machine category; a large number of bones may justify creating a separate category for analysis.
  • Management – often implied in Method or other cause category or implicated by cause descriptions.  A distinct category may be useful for presentation purposes or assignment of tasks.
  • Money – implied everywhere, all the time.  With enough money, virtually any problem can be solved!  On a more serious note, a struggling business or startup may find it useful to separate this category to address cash flow issues, availability of short-term financing, investment opportunities, tax burdens, etc.
 
     Some advocate for adding “safety” to the 4Ss of service C-E analysis.  In addition to the dangers of grouping safety with other concepts or pursuits discussed in “Safety. And 5S.,” there is another flaw in this logic.  “Safety” is typically considered as an outcome or goal, or the lack thereof as a problem to be solved.  Presenting safety as a category of causes distorts perceptions of this most-important topic, leading to poor decision-making.
 
     The 4Ps of marketing have also been expanded for C-E analysis.  Marketing C-E diagrams often include the following additional categories:
  • People – or Personnel.  All hupersons still qualify.  Sales staff, “influencers,” trade show booth attendants, or anyone else associated with a business, brand, or product are accounted for here.
  • Process – standard product introduction, or “rollout,” campaign templates, etc. that support consistent presentation to customers.
  • Packaging – the first interaction a customer has with a product involves its packaging.  Its influence on customer perceptions of quality, capability, ease of use, and so on should be evaluated.  Practical matters, such as product protection, branding, differentiation, etc. must also be considered.
  • Policies – a broad category that includes use of distribution channels, perks for preferred customers, and so on.
 
     Some sources reference the CEDAC – Cause and Effect Diagram with the Addition of Cards.  This “type” of diagram differs only in its method of construction; the content and use remains the same as previously discussed.  C-E diagrams are often constructed by a small, dedicated team in a brainstorming session; this has become such a common assumption that doing otherwise prompted coining a new name.  Rather than relying on a small team in isolation, a CEDAC is constructed in public view and by the entire organization.  Any member of the organization can provide input to the C-E analysis by adding their own “cards” to the diagram.
     Though it is not really a different “type” of diagram, the CEDAC provides significant advantages, such as:
  • It utilizes the experience and insight of the entire organization to draw a more-detailed picture of the situation and, thus, make better decisions.  Employee engagement and retention also tend to be higher in an inclusive environment.
  • It fosters transparency; problems and decisions are visible to all members of the organization and discussed openly.
  • Products, processes, and the influence each person has on their success are better understood by everyone in the organization.
  • It promotes widespread, spontaneous training on the technique, increasing the advantages described above.

     Though we have not yet strayed from the fishbone style, cause & effect diagrams may be presented in other formats.  When a C-E diagram is created to seek the root cause of a problem, it may be drawn within a graphic of a tree, as shown in Exhibit 6.  The advantage of this format is that it constantly reinforces the idea of a “root” in participants’ minds; construction is carried out in much the same way as in the fishbone format.  Even in the fishbone format, sub-causes are often referred to as “branches,” reinforcing the idea that these formats are interchangeable.
     A table format C-E diagram is less visually striking, but equally effective once users are familiar with its layout.  It is often generated in a spreadsheet, as no graphics are required; the widely-available tool provides a simple mechanism for storing and sharing the results of a C-E analysis.  A table format is a less-literal version of the tree diagram, as shown in Exhibit 7.
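     In software, a table-format C-E diagram amounts to a nested mapping of effect, major category, cause, and sub-causes.  The Python sketch below reuses the iron-contamination example from Exhibit 2; the “Machine” branch and its sub-cause are invented purely for illustration.

# Effect -> major cause category -> cause -> list of sub-causes
ce_table = {
    "Iron in Product": {
        "Materials": {
            "Raw material: water (H2O)": ["City distribution system"],
        },
        "Machine": {                                  # invented branch for illustration
            "Piping corrosion": ["Aging carbon-steel lines"],
        },
    },
}

# Read the table back as cause-and-effect statements to verify construction
for effect, categories in ce_table.items():
    for category, causes in categories.items():
        for cause, sub_causes in causes.items():
            for sub in sub_causes:
                print(f"{sub} causes '{cause}' ({category} issue), causing '{effect}'")

Reading the structure back in this way mirrors the verification step described earlier.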
Tips for C-E Diagram Construction
     Realistically, there are few tips needed to begin C-E analysis.  Members of a team assembled to conduct an analysis should have relevant experience; this is not always the same as direct experience.  For example, experience with a process similar, though not identical, to one under scrutiny can be valuable.  Likewise, a team member with experience creating C-E diagrams in manufacturing can help a product development team learn the technique.  Experience is often transferable; teams should not be too quick to exclude those with differing backgrounds.
     If the CEDAC approach is chosen, the entire organization’s experience is applied to the effort.  If it exists in the organization, and the organization has a healthy culture, the necessary information will be included in the diagram.
 
     Regarding the content of a diagram, the cause categories presented are merely suggestions.  Less-experienced individuals or teams often benefit from beginning their analyses with “traditional” categories.  As experience with the technique and the subject of analysis is gained, use of alternative major cause categories may be found to be more effective.  An example C-E diagram that uses alternative categories to good effect is shown in Exhibit 8.  Neither the names nor the number of categories is fixed; adjust as needed!
     One reason to modify the cause categories may be the developing diagram itself.  If there is a large number of bones in any one category, consider how that category may be logically subdivided to make the information more manageable.  If there are very few, determine if it is truly relevant to the objective.  If it is, consider combining it with another category.  A new category name may be needed to accurately represent the combined content.
 
     Many sources of instruction define the format of C-E diagrams much too rigidly.  For example, placement of the problem statement or goal on the right-hand side of the diagram is presented as if doing otherwise nullifies the entire effort.  It does not!
     Sure, we English-speakers read from left to right, but we also write from left to right, which can make whiteboard construction of a diagram awkward if the “rule” is enforced.  Being left-handed may exacerbate this further.  Creating a spreadsheet table in this manner would also be highly inefficient and tedious.  Likewise, the angle of cause branches from the spine, parallelism of fish bones, placement of causes at the ends of bones, and other “rules” are more flexible than they are typically presented to be.
Constructing a C-E diagram can be a fast-paced and messy exercise.  Once complete, a “clean” copy of the diagram is often created for future use.  When this is done, following the conventions (or rigid “rules”) may be advantageous and is often recommended.  Until such time, scribble the information in any manner that prevents it from being lost in the chaos of the exercise.

     Conducting C-E analysis and constructing a diagram follows a process that overlaps with that of creating an affinity diagram.  Brainstorming, sorting, and labelling can be performed in the same manner.  Final arrangement in the diagram results from defining the cause and effect relationships of the captured ideas and further probing for sub-causes.
     When performed for product planning, or other proactive purpose, execution is usually very linear in nature; each step is completed before beginning the next.  Each major cause is thoroughly investigated in succession, identifying data, tests, etc. that will be needed to ensure a successful design.  This information can be used in development plans, DFMEA, and other design documents.
     When performed to aid problem-solving, particularly in urgent situations, tasks may be done concurrently.  The most likely causes are selected by experience and currently-available data that can be immediately and quickly analyzed.  Verification testing and data requirements are identified to confirm that the problem has been eliminated.  Information gathered here can be incorporated in a PFMEA, Control Plan, or other process documentation.

     The graphical formats presented provide visual aids to understanding relationships among the information contained in a diagram.  However, the graphics can become a distraction or a gimmick; instead of aiding comprehension, they begin to interfere with it.  Exhibit 4 was chosen to demonstrate the potential for graphical presentation to run amok.  While the enormous fishhead may attract the attention of an elementary school class, the information that a management team needs to make effective decisions is virtually illegible.  Graphical flourishes are most useful for introductions to the technique and terminology; as familiarity grows, the graphics should fade into the background or be eliminated.

     These tips can be summarized as follows:  use the resources available, in the manner that best suits the organization and the situation, regardless of what “the rules” require.  “The Third Degree” is here to present alternatives and help practitioners understand them; it is not here to impose arbitrary restrictions that provide no meaningful safeguards or other advantage.
 
Summary and Conclusion
     The C-E analysis process in brief:
  • Form a team or choose to leave open to everyone.
  • Define the problem or goal.
  • Choose the type and style of diagram to be created.
  • Brainstorm causes or contributors.
  • Sort causes or contributors into major categories (cause enumeration) or process steps (process analysis).
  • Probe each cause for sub-causes (i.e. root cause).
  • Double-check that cause-effect relationships are properly represented.
  • Finalize and “clean up” the diagram.
  • Utilize the information to solve the target problem or achieve the desired outcome.
  • Transfer information to appropriate documents (e.g. FMEA).
  • Review the process followed to learn, improve, and teach others.
     Like other tools presented in “The Third Degree,” the cause & effect diagram appears simple, but has a lot of hidden value.  Readers are encouraged to explore this and other “simple” tools to discover new applications or modes of thinking about their objectives.  Creative adaptation of existing tools and techniques is far better than reinventing the wheel.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

     For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I: An Introduction to Business Mapping (25Sep2019).
 
References
[Link] “What is a Cause and Effect Diagram and How to Make One.”  Smartdraw.com
[Link] “What is a Fishbone Diagram?”  American Society for Quality, ASQ.org
[Link] “9+ Fishbone Diagram Templates - PDF, DOC.”  template.net
[Link] Creating Quality.  William J. Kolarik; McGraw-Hill, Inc., 1995. 
[Link] “Guide To Root Cause Analysis - Steps, Techniques & Examples.”  Software Testing Help.
[Link] “The Ultimate Guide to Cause and Effect Diagrams.”  Juran.
[Link] The Six Sigma Memory Jogger II.  Michael Brassard, Lynda Finn, Dana Ginn, Diane Ritter; GOAL/QPC, 2002.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Safety. And 5S.]]>Wed, 22 Feb 2023 14:00:00 GMThttp://jaywinksolutions.com/thethirddegree/safety-and-5s     In many organizations, complaints can be heard that there are too many programs and initiatives targeting too many objectives.  These complaints may come from staff or management; many of them may even be valid.  The response to this situation, however, is often misguided and potentially dangerous.
     To streamline efforts and improve performance – ostensibly, at least – managers and executives may discontinue or merge programs.  Done carelessly, consolidation can be de facto termination.  A particularly egregious example of this practice is to combine safety and 5S.

Safety
     Many organizations have combined safety and 5S, creating a “6S” program.  Doing so signals a critical disconnect between management and frontline employees or, worse, a disregard for frontline employees.
     Using “6S” in this manner implies that each “S” is of equal stature.  They are not.  Adding insult to injury (pun intended), safety is often added at the end of the list!  5S can contribute to safety by eliminating hazards, but remains subservient to safety.  Beginning with the same letter is not sufficient justification for consolidating efforts, ideas, or things.  Imagine Alaska and Alabama as one state!
     Use of “6S” can also be confused with a Six Sigma quality initiative.  At times, “6S” is substituted for “6σ” when the “σ” character is unavailable or “6Sig” is simply too long.
     Safety is the greatest responsibility of any organization.  Employees, customers, and partners rely on management to maintain safety as the highest priority.  To do this, it must stand alone from – no, stand above – all other initiatives, programs, and objectives.

And 5S
     The ubiquity of 5S training material renders a detailed presentation here unnecessary.  The following is the briefest of introductions for those unfamiliar with the 5S “process” and a refresher for those with previous exposure.
     The first attempt at 5S will be a sequential process, following the steps in the order described below.  Subsequent improvement efforts may include some iteration, but will not necessarily return to “step 1” for each change.  Without further ado:
  • Sort (Seiri) – remove unnecessary items from the work area.
  • Set (Seiton) – organize required items in accordance with the sequence and manner of use.
  • Shine (Seiso) – clean and optimize the work area.
  • Standardize (Seiketsu) – document the optimized configuration and transfer lessons learned to other work areas.
  • Sustain (Shitsuke) – ensure consistency by maintaining the work area according to the documented standard.
     For a more detailed presentation of 5S techniques, notable references include:
  • American Society for Quality (ASQ) – 5S Tutorial
  • Gemba Academy Glossary – 5S
  • Lean Enterprise Institute (LEI) – 5S search
 
     For additional guidance or assistance with safety, 5S, or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com               
]]>
<![CDATA[Commercial Cartography – Vol. VII:  Affinity Diagrams]]>Wed, 08 Feb 2023 14:00:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-vii-affinity-diagrams     An affinity diagram may stretch the definition of “map,” but perhaps not as much as it first appears.  Affinity diagrams map regions of thought, or attention, within a world of unorganized data and information.
     Mapping regions of thought in an affinity diagram can aid various types of projects, including product or service development, process development or troubleshooting, logistics, marketing, and safety, health, and environmental sustainability initiatives.  In short, nearly any problem or opportunity an organization faces can benefit from the use of this simple tool.
     Numerous resources providing descriptions of affinity diagrams are available.  It is the aim of “The Third Degree” to provide a more helpful resource than these often bland or confusing articles by adding nuance, insight, and tips for effective use of the tools discussed in these pages.  In this tradition, the discussion of affinity diagrams that follows presents alternative approaches with the aim of maximizing the tool’s utility to individuals and organizations.
     It is a simple process to create an affinity diagram; much of it is familiar to readers that have used other tools or “maps.”  Similar to a service blueprint, a large table, whiteboard, or wall should be used to allow several people to engage in the physical creation process simultaneously.  Use of sticky notes or index cards and masking tape allows rapid rearrangement of information.  Use of markers and large print allows participants to read cards from a distance of several feet.
     Creating an affinity diagram begins with a brainstorming session, or similar information-collection exercise, conducted with a cross-functional team of 5 – 7 members.  If more are needed to capture all relevant viewpoints, consider conducting multiple sessions, consolidating information in a later step.
     A facilitator is assigned to commence and monitor the proceedings, keeping the group on topic and on schedule.  To commence a session, the facilitator poses a concise question to the team, displaying it prominently in the group workspace.  The question can be related to a problem to be solved, a new opportunity, or any topic for which group consideration could be helpful.
     Each member of the group records responses to the question on sticky notes or index cards (“cards”), adding each to a central collection.  Typical brainstorming rules should be followed.  Particularly important is to refrain from discussion and debate; free-flowing, uninhibited ideas are needed.  If the team or facilitator prefers, all contributions could be recorded on a whiteboard, flipchart, etc. and transferred to cards in the next phase.  This approach may be advantageous if there is concern that a slower process may cause some ideas to be lost.
     When all of the group’s responses have been recorded, the team then begins to organize the cards by clustering similar or related ideas.  That is, ideas are grouped according to their affinity.  Participants may view these relationships differently; a card may be moved several times before there is agreement on the most appropriate placement.  If no consensus can be reached in the initial sorting process, a card may be duplicated; the idea is then included in multiple groups for further consideration.
     During the reorganization, or after it is complete, a descriptive label for each group becomes evident.  These labels are placed on “header cards” to identify each affinity group.  Any member of the group can create a header card or edit one created by another member.  Not every card need be grouped with others; standalone ideas are normal and permissible.
     Affinity groups can be joined under “superheaders” or further divided under “subheaders.”  The level of “filtration” most appropriate depends on the size of the groups and the way in which collected information will be used.  No rule can be defined; the only guideline is to do what works best for the team engaged and the objectives pursued.
     Many tutorials on affinity diagrams instruct the team to sort the cards without speaking.  There are several issues with an enforced silence approach, however, including:
  • It is unnatural among a group of familiar people.
  • It delays the discovery of team cohesion issues by facilitating passive-aggressive behavior.
  • It treats professionals like children, turning the facilitator into an elementary school librarian.  Constant “shushing” is more off-putting than the dialogue it seeks to suppress.
  • It precludes clarification of ideas for efficient grouping.
  • It delays decisions to duplicate cards, slowing the sorting step.
It is preferable for the facilitator to advise the team to minimize the “noise,” limiting conversations to clarifying questions or other brief exchanges that accelerate development of the diagram.  Detailed discussions of the merits of ideas, priorities, etc. should be quashed until the sorting step is complete.  A pictorial summary of the process thus far is presented in Exhibit 1.
     Presenting a wide range of ideas in this way ensures that every member of the cross-functional team is cognizant of the entire scope of a situation.  Too often, employees in siloed departments are unaware of the efforts taking place outside their area of responsibility, leading to “turf wars.” Departments vie for resources without understanding the implications for the organization as a whole.  An affinity diagram can provide the “big picture” understanding needed to encourage teamwork that leads to improved decision-making and resource allocations.
 
     After the cards have been sorted and labeled, the ideas in each group are subjected to closer scrutiny.  Again, this parallels a traditional brainstorming session, where contributions are evaluated only after collection is complete.  For smaller diagrams, all contributions may be evaluated by the entire team; clusters within large diagrams may be assigned to a subgroup for review.  These reviews may occur concurrently, as a component of the structured exercise, or independently.  Also, an individual may convene his/her functional department to review the cards and refine the information gathered.
     During the review, duplicate ideas are eliminated and those that are very similar may be combined.  The information on a card could also be split if doing so makes it more manageable.  The merit of each idea is thoroughly assessed; unrealistic or infeasible ideas are set aside.  Remaining ideas are prioritized and more fully developed.  Tasks and projects are defined based on the priorities established.  Cards that had been set aside are placed at the bottom of the stack – literally and figuratively – to be reprioritized should changing conditions make them feasible.
 
     Like other large-format map creations, a finalized affinity diagram can simply be left on a conference room wall.  However, subsequent activity spurred by the diagram, as well as the maintenance of organization-wide awareness, is aided by transforming it into an easily-distributed format.  This often means transcribing the information in an electronic document of some kind.  Common programs, such as spreadsheets and presentation-builders, can be used; single-function software or online tools are also available for this purpose.  In the simplest case, a digital photograph of a wall-sized diagram can be shared, provided the cards are legible in it.
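     For teams that transcribe directly into software, the finished diagram reduces to a simple grouped data structure.  The Python sketch below is a minimal illustration of one way to do so; the superheaders, headers, and idea cards are hypothetical placeholders, and the output is a three-column listing that any spreadsheet program can open.

import csv

# Hypothetical transcription of a small affinity diagram.  Each superheader
# contains headers; each header contains the idea cards clustered beneath it.
# All names are invented placeholders, not a recommended taxonomy.
diagram = {
    "Customer Experience": {                                     # superheader
        "Responsiveness": ["Shorten callback window", "Add live chat"],
        "Documentation": ["Rewrite setup guide", "Add troubleshooting FAQ"],
    },
    "Internal Operations": {                                     # superheader
        "Training": ["Cross-train support staff", "Record onboarding videos"],
    },
}

# Flatten the structure into (superheader, header, idea) rows so the diagram
# can be distributed as an ordinary spreadsheet file.
with open("affinity_diagram.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Superheader", "Header", "Idea"])
    for superheader, headers in diagram.items():
        for header, ideas in headers.items():
            for idea in ideas:
                writer.writerow([superheader, header, idea])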
     An example of what a transcribed affinity diagram may look like is presented in Exhibit 2.  The color code often recommended is depicted:  yellow for each idea, blue for headers, and pink for superheaders (or pink for headers and blue for subheaders, if this terminology is preferred).  Other colors may also be used to identify additional levels of filtration.  The choice of colors is not critical, but should remain internally consistent to avoid confusion.
     The example in Exhibit 2 is greatly simplified; an affinity diagram can have dozens or hundreds of items in any number of categories and levels.  Nonetheless, it demonstrates the flexibility of the tool.  Headers and subheaders can be organized differently depending on the phrasing of the foundational question, participants’ points of view, and so on.  For example, a railroad museum is grouped under Man-Made/Transportation, suggesting an interest in machinery and construction.  It could also have been grouped under the Historical header to reflect an overriding interest in the impact railroads had on the culture and economy of SC.  The railroad museum could also have been placed in both groups.  The objectives to be pursued ultimately determine what arrangement is most appropriate; the facilitator provides insight, guiding the team to a consensus.
     Another example diagram is shown in Exhibit 3; though no color code is used, the information is easy to read in this format.  The labels chosen for grouping issues related to medication delivery are associated with components of the system.  It would be understandable had this team, in its haste to improve patient care, skipped a step.  The issues identified could have been sorted under headers such as doctor, nurse, pharmacist, and administrator, identifying the party responsible for resolution of each.  However, this approach could stifle creativity and collaboration in the search for solutions; it is best to assign responsibility only after the logical structure of the diagram has been established and agreed upon by team members.
     While a distributed document is helpful, the physical artifact can still be useful.  If the clusters within a diagram are used to make task or project assignments, for example, a stack of cards, with a header card on top to identify its contents, can be placed in the hands of the responsible individual.  For some, a physical artifact provides motivation and inspiration that an electronic document simply cannot match.  Each card also provides a convenient place to jot additional notes, make sketches, etc. for use in the activity it represents.
     A team may want to consider various sorting and labeling schemes before finalizing an affinity diagram.  The caveat regarding premature assignment of responsibility notwithstanding, viewing the collection of information in various configurations can provide further insight into the challenges the team will face when applying a finalized diagram to chosen objectives.
     One method of refining a problem definition is to combine subgroups until a minimum number of superheaders is attained.  If the information can be distilled to two orthogonal sets, the situation is reframed, facilitating decision-making and prioritization.
     To illustrate, consider the schematic in Exhibit 4, which represents a generic customer complaint review.  Various types of issues can be identified – durability, documentation, speed, customization, and on and on.  And on.  Thorough consideration of these issues, in this example, reveals that two types of customer are not well-served:  novices and superusers.
     This reveals that a fundamental decision must be made before the information in the diagram can be effectively utilized:  what market(s) will be targeted?  The following choices are available:
  • Expand capabilities to satisfy both novices and superusers.
  • Develop product/service to cater to superusers.
  • Develop product/service to cater to novices.
  • Replace current offering with two separate products/services; one for each user type defined.
  • Maintain existing product/service offering (the “do nothing” option).
  • Withdraw the product/service from the market with no replacement.
Had this bifurcation of issues not been discovered, efforts to address customer issues could be misguided or haphazard.  Resources could be wasted with no discernible improvement in performance.  This example is applicable to product/service development and marketing efforts.
 
     Though development of an affinity diagram has been presented as a mostly linear process, like others discussed in these pages, this is not strictly true.  Idea generation and sorting often overlap; each activity ebbs and flows within the overall effort.  Experienced practitioners are more likely to do this naturally, as relationships between ideas are more readily apparent and cohesive teams have been established.
     It has also been presented as a group exercise; a team of coworkers addressing business issues is the archetypal application.  However, affinity diagrams are equally applicable to individual pursuits.  Even a team of one benefits from organized information, structured review, potential reframing of an issue, and other discoveries that can inspire creative responses to a difficult situation.
     An affinity diagram is a deceptively simple tool.  Despite its apparent lack of complexity, the insights that can be derived from it and the purposes for which it can be utilized are vast and impressive.  Simply put, organized information is easier to process and utilize.  When a more rigorous collection and refinement process is used, it may be referred to as the KJ Method. This name honors its creator, Jiro Kawakita, a Japanese anthropologist, who formalized the method in the 1960s.
 
     For additional guidance or assistance with mapping or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I:  An Introduction to Business Mapping (25Sep2019).
 
References
[Link] The Six Sigma Memory Jogger II.  Michael Brassard, Lynda Finn, Dana Ginn, Diane Ritter; GOAL/QPC, 2002.
[Link] “What is an Affinity Diagram?”  American Society for Quality.
[Link] “Affinity diagram.”  Productivity-Quality Systems, Inc.
[Link] “Affinity Diagrams: How to Cluster Your Ideas and Reveal Insights.”  Interaction Design Foundation.


Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Commercial Cartography – Vol. VI:  Service Blueprints]]>Wed, 25 Jan 2023 14:00:00 GMThttp://jaywinksolutions.com/thethirddegree/commercial-cartography-vol-vi-service-blueprints     Most manufactured goods are produced and distributed to the marketplace where consumers are then sought.  Services, in contrast, are not “produced” until there is a “consumer.”  Simultaneous production and consumption is a hallmark of service; no inventory can be accumulated to compensate for fluctuating demand.
     Instead, demand must be managed via predictable performance and efficiency.  A service blueprint documents how a service is delivered, delineating customer actions and corresponding provider activity.  Its pictorial format facilitates searches for improvements in current service delivery and identification of potential complementary offerings.  A service blueprint can also be created proactively to optimize a delivery system before a service is made available to customers.
     Various definitions of “service” are available in the service management literature.  In lieu of a formal definition, we will focus on key characteristics of services that differentiate them from products:
  • Service delivery is a process.
  • A system is used to deliver a service.
  • Service delivery requires coordination of a customer (“consumer”) and a provider (“producer”).
  • A service can be repeated, but cannot be reused.
 
The Value of Blueprinting
     Understanding what makes a service unique is critical to effective management.  A blueprint makes a significant contribution to this effort; use of the term has important connotations.  Blueprints are more commonly associated with manufactured products, where each is identical to the next.  While services can be highly variable in content or outcome, customers’ experience with a provider need not be.  A consistent customer experience is what a service blueprint documents.
     A well-developed service blueprint can serve several purposes, including:
  • rapid identification of the cause of a service failure, such as a skipped step,
  • rapid identification of appropriate recovery processes following a service failure,
  • recognition of mismatches between service delivery and customer expectations or desires,
  • identification of redundancies or potential efficiency improvements,
  • identification of potential service enhancements,
  • development of complementary service offerings,
  • evaluation of potential market segmentation, and
  • training new employees.
 
Preparing to Blueprint
     Before attempting to create a blueprint, some “guardrails” are needed.  The first step is to define the scope of the service to be blueprinted.  The scope definition can include the location, type of service, market segment, number of parties involved, group size, and so on.  Record any aspect of the service in which a deviation changes the requirements of the delivery system.  Clearly defining the scope informs a determination of the number of blueprints needed to fully document a service business.
The status of the process to be documented must be known to develop the appropriate type of blueprint.  A “current state map” documents an existing service delivery system, including its flaws and “dead ends” where no defined process exists; recent performance data should be incorporated in the blueprint.  It is critical that a current state map be thorough and accurate, lest the value of the effort be squandered.
     A “future state map” documents a service delivery system as envisioned after proposed modifications have been implemented.  The blueprint shows how a service would be delivered with redundancies removed, dead ends remedied, efficiencies improved, enhancements included, and other improvements made.
     A “service development blueprint” proposes a service delivery system to be established.  It provides opportunities to simulate the system and optimize predicted delivery performance before customers subject it to the ultimate test.
     Two previous installments of “The Third Degree” are recommended for review in conjunction with this discussion:
The first is applicable to all types of service blueprints; the similarities between service blueprints and process flowcharts will become apparent.  Reference the second to assist development of an initial service definition or refinement of an existing delivery system.
     Membership of a development team may be obvious in, say, a two-person startup.  Larger organizations, however, may have to consider resource tradeoffs to form a team.  In any case, all members of the blueprinting team should be clearly identified to secure their participation.
     The final preparatory decisions to be made relate to the method by which a blueprint will be physically created.  Initial development of a blueprint should take place on a scale large enough for the entire team to engage with it simultaneously; sufficient scale can be achieved using a large table or wall.  It should be easily reconfigurable, or editable, for rapid incorporation of new ideas.  This is typically accomplished using a whiteboard or “sticky notes.”
     When the content of a blueprint has been “finalized,” a “clean copy” can be created, either on paper or in software.  A refined version is useful for formal presentations; however, it is often unnecessary.  If the blueprint is viewed exclusively by the development team – i.e. it remains an internal document – a presentation-quality version may add little value.  An important exception is the use of the blueprint for training purposes; documents that are easy to understand accelerate learning.
 
Creating a Blueprint
     On the large-format medium where the blueprint will take shape, establish the basic layout needed to organize information.  All that is required to do this is to place the labels along the left side and draw the dividing lines described below.  An example of a blank service blueprint is shown in Exhibit 1.  Some changes in terminology from that shown in Exhibit 1 are recommended; from top to bottom, identify the following rows and boundaries:
  • Tangible Evidence – artifacts that are used to deliver the service, prompt an action, or demonstrate completion of a step in the delivery process.  A less-literal term than “physical evidence” is appropriate for many modern services.
  • Customer Action – actions taken by a customer to initiate and proceed through a service delivery process.
  • Line of Customer Interaction – visually separates customer actions from provider actions.
  • Onstage Action – provider actions that a customer “sees” or is aware are taking place.  “Contact Employee” has been dropped from this label because it may be misleading as the nature of services continues to evolve.
  • Line of Visibility – visual representation of the separation between “front office” and “back office” activity.
  • Backstage Action – provider actions that are hidden from the customer’s “view,” but are necessary for service delivery.  Tasks may be completed by the onstage actor or another; “contact employee” has again been dropped from the label.
  • Line of Internal Interaction – visually separates activity directly related to a specific service delivery from the general-use tools, managerial procedures, and support staff.
  • Support Processes – information technology-based processes, other organizational-level tools, support staff and management activity.
     The need for flexible terms increases as services evolve to include more technology and remote delivery features.  Their value is more evident when analyzing modern service delivery systems than in the context of traditional face-to-face services.
 
     To begin populating the diagram, generate a list of all actions taken by a customer throughout the service experience.  Enter these in the Customer Action row, chronologically from left to right, in the sequence that they occur.  The timeline created need not be to scale, but the actions should be spaced sufficiently to allow the team to build the blueprint around them, making changes and additions as necessary without the need to start over.
     For each customer action, identify the Onstage Action that it triggers or that triggered it.  An action by either customer or provider can prompt a response by the other; identify all onstage actions with either causal relationship.  Corresponding actions are typically aligned in a column as a spatial representation of their relationship.
     The direction of an arrow connecting the two actions represents the causal relationship.  The arrow originates at the prompting action and terminates at the responsive action.  It is possible for this relationship to be bidirectional (depicted by a double-headed arrow).  That is, either party’s action can prompt the corresponding action from the other, like a handshake or a negotiation.  Pairs of corresponding customer and onstage provider actions are often called “touch points” of the service.  Metaphorical use of this term is in order; it represents an interaction between customer and provider, but physical contact, or even colocation, may not be required.  Some services require physical contact (e.g. haircut, massage), but an increasing number can be completed remotely.
     Continue building the blueprint by adding onstage actions that are not paired directly with a customer action.  For example, a single customer action may prompt a series of onstage actions, each of which needs to be included on the blueprint.  This is one reason why the initial spacing of customer actions is important.
     Next, add backstage actions – those activities “hidden from view” of the customer – and connecting arrows indicating the actions that prompt them and the actions that they, in turn, prompt.  Backstage actions can be connected to onstage or customer actions or to support processes, which are to be added next.
     Support processes may include information technology, third-party-provided services, coordination or facilitation processes (e.g. scheduling, work planning), or other “hidden” processes that improve service delivery.  Support processes can be associated with onstage or backstage actions.
     At this point, operation of the service delivery system is well-defined.  To complete the blueprint, populate the first row with tangible evidence employed throughout the service delivery.  The position of each corresponds to the action that prompted its use or creation.  That is, its position in the row indicates the timing of the use or creation of an artifact during service delivery.
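     If a team later transcribes its blueprint into software, the layout reduces to a handful of small records:  each action carries the row (lane) it occupies and its left-to-right position, and each arrow records a causal relationship between a prompting action and a responsive action.  The Python sketch below is a minimal, hypothetical illustration using the row labels recommended above; the service fragment and action names are invented, and no standard blueprinting format is implied.

from dataclasses import dataclass

@dataclass
class Action:
    name: str        # what happens
    lane: str        # blueprint row:  Tangible Evidence, Customer Action, etc.
    position: int    # left-to-right order on the (not-to-scale) timeline

@dataclass
class Arrow:
    prompt: str                   # the prompting action
    response: str                 # the responsive action
    bidirectional: bool = False   # e.g. a handshake or negotiation

# Hypothetical fragment of an appointment-booking service.
actions = [
    Action("Confirmation email", "Tangible Evidence", 2),
    Action("Request appointment", "Customer Action", 1),
    Action("Confirm appointment", "Onstage Action", 2),
    Action("Check technician availability", "Backstage Action", 1),
    Action("Scheduling system", "Support Processes", 1),
]

arrows = [
    Arrow("Request appointment", "Check technician availability"),
    Arrow("Check technician availability", "Confirm appointment"),
    Arrow("Scheduling system", "Check technician availability", bidirectional=True),
]

# Print the blueprint lane by lane, then the causal relationships, to verify
# the transcription.
for lane in ["Tangible Evidence", "Customer Action", "Onstage Action",
             "Backstage Action", "Support Processes"]:
    entries = sorted((a for a in actions if a.lane == lane), key=lambda a: a.position)
    print(f"{lane}:  " + "  |  ".join(a.name for a in entries))
for arrow in arrows:
    print(f"  {arrow.prompt} {'<->' if arrow.bidirectional else '->'} {arrow.response}")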
 
     Additional information can be included on a service blueprint if the development team deems it useful.  The duration of each activity is one of the most common suggestions.  Considering the variability of services, an average or range of anticipated completion times may be more appropriate.  Citing a fixed duration may be misleading or demotivating.
     “Fail points” are also common additions to service blueprints.  These may be superfluous, however.  Realistically, a delivery system could fail at any point; visually indicating this fact may be unhelpful.  Flagging high-risk or frequent failures, where special attention is required, may be worthwhile, but must be done sparingly to remain effective.
     Color-coding may also be used to differentiate between actors or other logical compartmentalization of the delivery system.  Emotional states, performance metrics, and relevant regulations have also been suggested for inclusion.
     Only the basic components of service blueprints were included in the initial presentation to avoid distraction or overload.  Each development team must decide what elements to include based on the value each adds to the effort.  It is strongly recommended that a basic blueprint be created first and that other elements be added very selectively.  As elements and information are added, the blueprint may become more difficult to read; it may also encroach on other types of “maps” or diagrams that the organization may use.  Providing references or links to other sources of information can be an effective alternative.   Caveat creator!
 
     The steps of creating a service blueprint have been described sequentially for simplicity of presentation.  In reality, once a sequence of customer actions has been identified, the remainder of the blueprint will, in most cases, be generated much more organically.  Team members’ individual attention will be drawn to different aspects of the delivery system at different times, allowing the blueprint to take shape much more rapidly than is possible when the entire team works on each aspect collectively.  Changes are made and reviewed on an ongoing basis, allowing team members to engage with the blueprint more naturally than a strict process structure allows.
     Purely sequential development is likely to require several iterations before a plausible blueprint results.  Organic development can be as effective, if not more so, than a number of iterations and can be completed in much less time.  Therefore, the efficient service blueprinting effort may be one that is launched and finalized as a team, with a period of free-form activity between, as depicted in Exhibit 2.  For this to be true, however, the team members must be comfortable with this approach.  Significant experience with service blueprinting may be needed to achieve a sufficient level of comfort for the team to utilize the organic development approach effectively.
Service Blueprint Examples
     Reviewing examples of completed service blueprints is a good way to become familiar with the layout and comfortable with the creation process.  The examples below show some variation in content, terminology, and complexity.  The examples are not comprehensive, but are sufficient to demonstrate the technique’s flexibility.  New practitioners can use them for inspiration as they develop their own blueprinting “style.”
     A generic service blueprint is shown in Exhibit 3.  The generic blueprint does not identify a specific service; instead, it depicts actions and interactions that are common to many types of services.  However, it may serve as a useful guide; translating the generic descriptions into ones specific to the service of interest can kick-start the development process.
     The generic blueprint of Exhibit 3 includes a Line of Implementation between support processes and management action (rows are unlabeled, but this reflects their content).  This line was excluded from the preceding discussion for two key reasons:
  • Few authors include this line in their presentations or example formats.
  • Managerial actions are support processes; separating them adds little value to most service blueprints.
If the blueprint created is sufficiently complex to warrant separation of managerial actions from other support processes, the development team is free to do so.  Ultimately, the format that the team is most comfortable with is the best one.
 
     A blueprint for an overnight hotel stay is shown in Exhibit 4.  While it may be more aesthetically pleasing to some readers, the colors provide no additional information.  As the color only differentiates between rows, it is superfluous; thus, it remains a basic blueprint.  Aesthetic improvements are valid and acceptable, of course, but there should be no confusion about their purpose.
     The Blueprint for Engineering Services presented in Exhibit 5 uses the terminology recommended in “Creating a Blueprint,” above, and introduces the use of “fail point” notations.  Only a small selection of fail points is identified as such to prevent it from becoming a meaningless designation.  The highest-risk points of failure – those that could derail a service, irreparably damage the customer-provider relationship, or invalidate an entire delivery system – are identified to ensure awareness of their criticality.
     Identifying fail points is only effective if those responsible for successful execution understand the implications.  For identification of a fail point to be helpful, the team must understand and communicate the following:
  • the consequences of failure at this point of the service delivery,
  • how to execute the required action correctly,
  • how to respond to a developing situation, i.e. an impending service failure,
  • how to recognize the occurrence of a service failure, and
  • how to recover from a service failure.
Readers should take note that fail points can exist at any point in the service delivery, including customer action.  A breakdown can be caused by customer or provider, onstage or backstage, or a support process can fail.  All aspects of the service delivery system must be considered to develop effective countermeasures.
           
     A service blueprint can also be expanded to include multiple interacting parties, such as when a broker or other intermediary facilitates a transaction between two independent parties.  An example of this type of expanded blueprint is shown in Exhibit 6.  The top of the diagram depicts one interaction, while the bottom of the diagram mirrors the top to show interactions with the second party.  Support processes are central in the diagram; they are used in the interactions between the intermediary and both parties, linking them at various stages of service delivery.
     The examples provided have not been identified by type (current, future, or development); taken out of context, their type cannot be determined.  Their value as inspirational guides, however, is unaffected by this.  Each blueprint is unique; a development team need not be concerned with the motivation behind an example blueprint, whether it was created with data or imagination.
 
Final Thoughts
     Use of service blueprints can help providers achieve efficiencies and consistent delivery performance that contribute to profitability and customer satisfaction.  To develop an effective service blueprint, follow this very brief summary:
  • Define the scope of service.
  • Choose blueprint type – current state, future state, or development.
  • Select team members.
  • Choose method and acquire materials.
  • Generate list of customer actions as a team.
  • Develop blueprint organically.
  • Review and finalize blueprint as a team.
Other key points to remember:
  • Use rows to differentiate “actors.”
  • Use columns to establish “timeline” or temporal relativity.
  • Use arrows to show causal relationships.
  • Add information, beyond basic components, sparingly.
  • Link to detailed information (PFD, instructions, etc.) to keep blueprint simple.
     As experience is gained with service blueprints, the creation process, content, and perhaps the format will evolve.  Revisions will be necessary as technology and other aspects of service delivery change and advance.  An imperfect blueprint is not a failure; it is a learning and growth opportunity.  As a blueprint is developed and revisited, new ideas emerge; customer and provider alike benefit from the experience.
 
     For additional guidance or assistance with service blueprinting or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
For a directory of “Commercial Cartography” volumes on “The Third Degree,” see Vol. I:  An Introduction to Business Mapping (25Sep2019).
 
References
[Link] “Designing Services That Deliver.”  G. Lynn Shostack.  Harvard Business Review, January, 1984.
[Link] “Service Blueprinting:  A Practical Technique for Service Innovation.”  Mary Jo Bitner, Amy L. Ostrom, and Felicia N. Morgan.  California Management Review, Spring 2008.
[Link] Mapping Experiences.  James Kalbach.  O’Reilly Media, 2016.
[Link] Winning the Service Game.  Benjamin Schneider and David E. Bowen.  Harvard Business School Press, 1995.
[Link] Service Management, 8ed.  James A. Fitzsimmons, Mona J. Fitzsimmons, and Sanjeev K. Bordoloi.  McGraw-Hill/Irwin, 2014.
[Link] “Service Blueprints: Definition.”  Sarah Gibbons.  Nielsen Norman Group, August 27, 2017.
[Link] “The 5 Steps to Service Blueprinting.”  Sarah Gibbons.  Nielsen Norman Group, February 4, 2018.
 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Resolution:  Set Goals]]>Wed, 11 Jan 2023 15:30:00 GMThttp://jaywinksolutions.com/thethirddegree/resolution-set-goals     By this time, many New Year’s resolutions have already been abandoned.  Those that have not may still be ineffective in changing behavior or achieving desired outcomes.  Setting goals, as a strategy, is far superior to making resolutions when it comes to reaching a desired future state.
     Goal-setting can be personal in nature, as resolutions typically are, or organizationally-focused.  In an organizational context, goals can be defined for individuals, groups or the organization as a whole.
     Research suggests that goal-setting can be very beneficial to individuals and groups alike, but it is not without risk.  This installment of “The Third Degree” shows how to tip the scales toward favorable individual or group outcomes by setting goals that are SMART, PURE, and CLEAR.
     The SMART, PURE, and CLEAR goal-setting models, used conjointly, comprise the John Whitmore Model.  Sir John Whitmore was a British racing driver, business coaching pioneer, and author.  Alternative formulations of the constituent models can be easily found; those presented herein are deemed the most useful in this combination.
 
SMART Goals
     Of the three models, SMART goals are the best known; numerous resources exist to offer variants.  Therefore, this section may be a refresher or an introduction to the chosen formulation, wherein the acronym asserts that goals should be specific, measurable, attainable, relevant, and time-bound.  Each component of the SMART goal-setting model is discussed in brief below.
Specific:  A well-defined goal is necessary to plan for its achievement and recognize success.  Vague statements of a goal are unlikely to motivate those responsible for its achievement.  They may also divert attention toward defending a particular interpretation of the goal and away from productive activity.
Measurable:  Comparison to a standard – the essence of measurement – is necessary to evaluate progress toward a goal in an objective manner.  A person’s subjective assessment cannot be measured; progress cannot be evaluated effectively prior to a final determination of success or failure.  This can lead to costly rework or abandonment of the goal.  Only objective measurement provides the feedback necessary to evaluate activity in progress.
Attainable:  Realistic expectations must be maintained; overly-aggressive goals are counterproductive.  If failure is assured, effort is futile and will not be expended; performance will be lowered by a goal intended to increase it!  Compare the scope, scale, and time required to the resources available before committing to a goal.  (This does not preclude “stretch” goals – those that are known to be difficult to achieve, but are nonetheless possible.)
Relevant:  The relevance of a goal refers to its alignment with higher-level goals.  In a goal hierarchy, performance goals are the milestones on the path to achieving an ultimate goal.  Likewise, individual goals support the achievement of group or organizational goals.  For example, an individual performance goal of making five additional sales presentations supports the ultimate organizational goal of increasing sales revenue by 10%.  Ensuring a goal’s relevance prevents distractions from squandering resources.
Time-bound:  An effective goal requires a deadline.  Open-ended goals can easily be set aside.  With no deadline, lack of progress cannot be criticized; with infinite time to complete a task, imperceptible progress is an acceptable pace!  A deadline prompts dedication of resources and instills a sense of urgency.
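     Viewed together, the five SMART components read like the fields of a simple record; if a goal cannot be expressed this way, it is probably not yet SMART.  The Python sketch below is only an illustration of that idea, reusing the sales example above; the class, field names, and figures are hypothetical and are not part of any formal goal-setting tool.

from dataclasses import dataclass
from datetime import date

@dataclass
class Goal:
    specific: str       # what, exactly, will be accomplished
    metric: str         # how progress is measured
    target: float       # the measurable standard to reach
    current: float      # latest measurement
    relevant_to: str    # the higher-level goal this one supports
    deadline: date      # the time-bound commitment

    def progress(self) -> float:
        """Fraction of the target achieved so far."""
        return self.current / self.target if self.target else 0.0

    def days_remaining(self, today: date) -> int:
        """Days left before the deadline (negative if it has passed)."""
        return (self.deadline - today).days

# Hypothetical goal echoing the sales example above; figures are illustrative.
goal = Goal(
    specific="Deliver five additional sales presentations per month",
    metric="additional presentations per month",
    target=5, current=3,
    relevant_to="Increase sales revenue by 10%",
    deadline=date(2023, 12, 31),
)
print(f"{goal.progress():.0%} of target, {goal.days_remaining(date(2023, 6, 30))} days remaining")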
 
PURE Goals
     PURE goals are positively stated, understood, relevant, and ethical.  Like the better-known SMART model, the terms are seemingly straightforward, yet some discussion is worthwhile.
Positively stated:  A goal should be defined by something to be attained rather than something to be eliminated.  “Stop” and “no” type statements – common in resolutions – are often associated with negative emotions that can be demotivating.  For example, a goal statement of “no coffee after noon” may prompt a feeling of failure every time a mid-afternoon pick-me-up is needed.  These feelings can lead to abandonment of the goal.  Alternatively, “three cups of coffee per day, maximum” can limit total consumption even when afternoon doldrums seem to require coffee to survive.  In many cases, the positive statement more accurately reflects the intention of the goal being set.  Positive emotions associated with desired behavior change encourage further improvements.
Understood:  Understanding the goal is necessary to guide activities toward its achievement.  This is particularly important in group settings; a shared understanding of the goal is essential for each individual to make appropriate contributions according to their roles.  Stated another way, a common understanding ensures that performance matches expectations.
Relevant:  When using three goal-setting models in conjunction, some overlap is not surprising.  See “Relevant” in the SMART goal section for a discussion of relevance.  Alternative formulations of the SMART and PURE models sometimes use “realistic” for this component instead.  See “Attainable” in the SMART goal discussion to see how this, too, overlaps other components of the models.
Ethical:  We hope that individuals and organizations always subscribe to the highest ethical standards, but this cannot be assured.  The limits of “ethical” behavior are dependent upon the culture in which it is evaluated, making this term more fluid than many of us would like.  In the most basic terms, pursuance of an ethical goal should bring no harm to others through deliberate action, inaction, or deception.
 
CLEAR Goals
     CLEAR goals are challenging, legal, environmentally sound, agreed, and recorded.  Again, the component terms seem straightforward, but the context warrants some discussion to ensure clarity.
Challenging:  Challenges motivate people to perform at their best.  This is one of the key reasons for setting goals!
Legal:  Consideration of legality should extend beyond the typical interpretation.  In addition to statutory law, there are various levels of regulatory control.  These include federal and state levels, but additional regulations may also be imposed by county and municipal ordinances.  A personal goal may require consideration of HOA (Homeowner Association) rules, while corporate governance guides organizational goal-setting.  Various other sources of potential conflict may also exist; the interpretation of “legal” should be kept loose to encourage assessment of goals in relation to the entire spectrum of rules and regulations.
Environmentally sound:  The obvious interpretation of environmentally sound is to have a neutral or positive impact on the natural environment.  Interpretation of this term can also be expanded to improve goal-setting.  For example, the environment to be considered could be an office area.  A goal of increasing collaboration among cross-functional groups may inspire changes to the office environment that lower productivity within these groups.
Agreed:  Agreeing on a goal ensures that all members of a group are working in unison toward its achievement.  In some cases, agreement from individuals that are not actively engaged in pursuance of the goal can be valuable, even critical, to success.  This often takes the form of supportive behavior, or the cessation of counterproductive behavior.  For example, one spouse can support the dieting goals of the other by stocking the kitchen with healthy food options or keeping sweets out of sight to minimize temptation.
Recorded:  Documenting details of a goal serves two key purposes.  First, it provides an “anchor;” referencing charter documentation can guide activity, ensuring that efforts remain aligned with the goal.  Second, it can be used as a commitment device.  Disclosure creates expectations that motivate those responsible to perform.
     Exhibit 1 provides a reference to facilitate recall of important aspects of the three goal-setting models that comprise the John Whitmore Model.
Overprescription and Remediation
     Research has shown that goal-setting offers benefits in motivation and collaboration.  However, the opposite side of the goal-setting coin – the risks associated with goal-setting – is often ignored.  Failing to account for potential negative impacts can lead to counterproductive goal-setting, or overprescription.
     Setting too many goals creates unrealistic expectations; limited resources are incapable of achieving all of the goals set.  In a business setting, this often leads to disengagement of employees.  As failure seems inevitable, efforts are abandoned, jeopardizing all of the goals.
     If goals are too narrowly focused, efforts to achieve them may be detrimental to overall performance.  That is, negative impacts may be induced outside the scope of a goal’s context.  These negative impacts may exceed the benefits gained from pursuing the goal, yet accountability for them remains ambiguous.  For example, pursuing a cost-reduction goal may limit service providers’ ability to satisfy customers.
     Conflicting goals are a common symptom of overprescription.  They can lead to unethical behavior, as those responsible attempt to maintain the appearance of goal achievement.  Conflicts may render goal achievement impossible, but an organization’s incentive structure and culture may entice some to pretend otherwise.
     Aggressive goals may encourage risk-seeking behavior beyond what was intended.  Risky behavior can be seen in financial markets, production environments, transportation, or any other goal-setting context in which “challenging” is taken to an extreme.
 
     To minimize the risks of goal-setting, leaders must carefully analyze each goal in relation to all other goals.  Though each goal may be independently SMART, PURE, and CLEAR, the entire set of goals must be reviewed to minimize conflicts and other potential negative impacts.
     Remediation of goal overprescription in an organization relies on a culture of ethical behavior, quality, customer satisfaction and so on.  These positive attributes must be valued more than “hitting the numbers.”  Hitting the numbers is a short-term strategy, while positive behaviors foster long-term success; this belief should be reinforced regularly in documentation and in deeds.
     A critical element of such a culture is the ability to point out deficiencies in the goal-setting program without fear of reprisal.  Also, missing targets that are unrealistic, conflicting, or otherwise detrimental to overall performance should not be treated as failures.  To support this, goals should be prioritized at the outset.  Doing so allows team members to seamlessly shift resources to support the most important work as circumstances change.
     In a parallel to goal prioritization, tradeoffs in performance deemed unacceptable must be defined as part of the goal-setting exercise.  In the example above, the cost-cutting target could have been made subject to a minimum customer satisfaction score.  In this way, expectations are made explicit, spurring creative problem-solving when targets are in danger of being missed.
     Ordóñez et al. pose a series of questions that should be asked prior to setting a goal and suggest ways to reduce the related risk.  A summary of this information is provided in Exhibit 2.  A reference to the components of the Whitmore model to which each question is most closely correlated has also been included.
     The additional steps required for effective goal-setting are frequently overlooked.  However, the attention paid to these activities can be the difference between smooth sailing and running aground.
 
Related Posts
     The subject of goal-setting is closely related to topics covered in previous installments of “The Third Degree.”  Reviewing the following posts in light of the goal-setting discussion could be helpful.
  • “Beware the Metrics System” [Part 1 (28Aug2019), Part 2 (11Sep2019)].  The requirement that goals be measurable comes with its own challenges.  These posts aid the selection of appropriate metrics to evaluate progress without inducing negative impacts.
  • “Unintended Consequences – It’s the Law” (16Nov2022) and “Revenge ON the Nerds” (30Nov2022) further illuminate the importance of pursuing the “right” goals and potential repercussions of giving short shrift to goal-setting.
  • The “Making Decisions” series overlaps goal-setting in various ways.  When pursuing agreement, the posts discussing group decision-making may be particularly helpful:
     “Vol. IV:  Fundamentals of Group Decision-Making” (20May2020) and
     “Vol. V:  Group Techniques” (3Jun2020).
 
     For additional guidance or assistance with goal-setting or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] The Decision Book.  Mikael Krogerus and Roman Tschäppeler.  Norton & Company, 2018.
[Link] “Goals Gone Wild: The Systematic Side Effects of Over-Prescribing Goal Setting.”  Lisa D. Ordóñez, Maurice E. Schweitzer, Adam D. Galinsky, and Max H. Bazerman.  Perspectives; Academy of Management, February 2009.
[Link] “Three models to help you set goals.”  Rob Pillans.  PlanetConsulting, September 14, 2021.
[Link] “Goal setting acronyms — do you know all of them (SMART, PURE, CLEAR + SMEAC)?”  Myroslava Zelenska.  Medium, August 31, 2021.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Reflections & Projections 1]]>Wed, 28 Dec 2022 15:30:00 GMThttp://jaywinksolutions.com/thethirddegree/reflections-projections-1     As our calendar expires once again, many of us will be inspired to reflect on our journeys and to look forward to new adventures.  “The Third Degree” is not immune to this drive; this installment serves that very purpose.  It will not be mere nostalgia, however.
     It is in the spirit of continuous improvement that previous installments will be revisited, referencing material published since their release.  Resources that were simply missed in the initial telling will also be shared.  This is a mostly chronological journey to simplify navigating the archive to review the original articles.
     As the title suggests, scanning the frontier, as well as updating previous installments, is expected to be a periodic endeavor.  Despite the title, this is not the first attempt at doing so.  See “Hindsight is 20/20.  Foresight is 2020” (1Jan2020) for additional updates to the earliest installments of “The Third Degree.”  These and future updates can be found under the “uncategorized” heading in the archive.
 
Reflections
     The “skills gap” remains a popular topic, though discussion has been morphed by a more-comprehensive labor shortage.  Precipitated by the COVID-19 pandemic, labor shortages impact nearly every sector of the economy.  This situation extends, and makes more explicit, the overlap of discussions in “The Skills Gap Fallacy” series [Part 1 (7Feb2018), Part 2 (14Feb2018), Part 3 (21Feb2018)] and “Avoid the Spiral into Oblivion” (28Feb2018).  We will not delve into the implications of work-from-home (WFH) policies, though they also contribute to staffing woes in many cases.
     While the number of articles pertaining to “the skills gap” and other staffing difficulties remains high, few offer anything unique.  Even the highest-quality advocacy is largely ineffective because it “preaches to the choir” without gaining significant exposure to a broader audience.  The grim performance of this “movement” notwithstanding, a few items warrant a second look.
     During the Leadership Exchange at FABTECH 2019, moderator Kord Kozma of Nidec Press & Automation made the following declaration:
“We need to make a call of action to [manufacturers] because you are the only ones who can fix it. The government is not going to fix it for you, schools aren’t going to fix it for you. It’s up to you to change what you’ve been doing. You need to think differently. You need to act differently, invest differently, and lead differently.”  (Quoted in the January 2020 issue of Manufacturing Engineering, “Up Front” column.)
This echoes the sentiment of “The Third Degree,” possibly with a larger megaphone.  Still, little evidence of this transformation in industry is available, though the occasional apprenticeship program announcement keeps hope alive.
     One possible outreach to an “extra-choir” audience was provided by “The Indicator from Planet Money” on December 8, 2022 (E1250).  In this podcast episode, two quotes from the Federal Reserve’s latest Beige Book were shared:
“One company that was looking to hire an experienced worker decided to hire an entry-level worker instead and just pay for the training.” – Richmond Fed
“Finding qualified candidates was reported as nearly impossible, so firms increased investments in training new hires.” – Atlanta Fed
     A previous “Indicator” episode was discussed in “Avoid the Spiral into Oblivion;” it, too, pertained to the Fed’s release of a Beige Book.  The two issues discussed are related, but must not be conflated.  The availability of labor and employers’ willingness to pay for it are very different problems.  Each is worthy of discussion, but requires an independent solution.
     To encourage and facilitate development of apprenticeship programs, Plant Services published “How to leverage apprenticeship programs” in the September 2021 issue.  Unfortunately, the author gives far too much credit to registered apprenticeship programs and “educational” institution partnerships.  As stated previously, established programs and partnerships may be advantageous, but should not be the default solution to staffing and training shortfalls.  Every organization must assess its circumstances and explore its options before the “best” path forward can be chosen.
     Many of the registered or state-approved programs are akin to “workforce engineering,” which, as a March 2022 guest editorial in Manufacturing Engineering points out, “is not a realistic solution to the skills gap.”  Instead, we must “ignite the already existing and yet dormant fuel of curiosity inside of young minds.”  This expresses sentiment similar to that of Exupéry, quoted in “Viva Manufacturing!”
     Developing curiosity and other forms of intrinsic motivation should be a major component of any educational or promotional endeavor.  As these efforts become more effective, the remaining tasks become easier.  A key advantage of a customized apprenticeship or similar program is that participants are able to pursue their interests without the burden of additional requirements that they deem unnecessary, uninteresting, or otherwise off-putting.
     One unnecessary effort is to attach a new label to jobs created by technological change.  The term “new collar” has been promoted by a former IBM executive (“The New Collar Workforce.”  IndustryWeek, March/April 2020).  Worse than being unnecessary, it could be counterproductive.  Labels like this can be confusing and divisive, creating stigma for some of the same people it is intended to attract.
     By the time the confusion and stigma fade, the label becomes meaningless.  When do “new collar” jobs become, simply, “jobs” and do we need new collars for every significant technological advancement?  Referencing “collars” to differentiate between professional responsibilities represents an antiquated mental model; the practice should be abandoned.  That is, of course, unless you’d like to see the pet care industry advertising “dog collar” jobs.  Let’s end this before it gets any more ridiculous!
 
     “Sustainability Begins with Succession Planning” (11Apr2018) approached staffing concerns from a more strategic point of view.  Early this year, strategy + business published “The new rules of succession planning” (February 7, 2022), outlining a 3-step process for choosing future leaders.  The third step, “structure the process to mitigate bias,” aligns with the thrust of the previous post – avoid simply choosing a “carbon copy” of the outgoing leader.  Maintaining objectivity is critical to choosing leaders capable of guiding an organization through inevitable challenges.
 
     “The High Cost of Saving Money” (2Jan2019) presented some counterproductive actions that businesses take to lower costs.  One example cited is when purchasing decisions are based exclusively, or nearly so, on the initial cost (“sticker price”) of an item.  In “Lowest Lifecycle Costs vs. Lowest Price” (Manufacturing Engineering, June 2020), Mazak Corp.’s CEO encourages buyers to consider the total cost of ownership (TCO) of assets.  TCO accounts for operational efficiencies (e.g. energy, material), reliability (e.g. downtime, degrading performance), maintainability (e.g. frequency and cost of service), training needs, and any other factors that influence the cost of acquiring, operating, and disposing of an asset.  Employing TCO analysis is an effective method of avoiding the irony of increased costs caused by reduced spending.
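     The arithmetic behind a TCO comparison is straightforward, as the Python sketch below illustrates; the machines, cost categories, and figures are invented for illustration and are not drawn from the cited article.

# Hypothetical total-cost-of-ownership comparison of two machines over a
# 10-year service life.  All figures are invented for illustration.
LIFE_YEARS = 10

machines = {
    "Machine A (lower sticker price)": {
        "purchase": 250_000,
        "energy_per_year": 18_000,
        "maintenance_per_year": 22_000,
        "downtime_cost_per_year": 15_000,
        "training": 10_000,
        "disposal": 5_000,
    },
    "Machine B (higher sticker price)": {
        "purchase": 310_000,
        "energy_per_year": 12_000,
        "maintenance_per_year": 14_000,
        "downtime_cost_per_year": 6_000,
        "training": 6_000,
        "disposal": 5_000,
    },
}

# Sum one-time costs plus recurring costs over the service life.
for name, c in machines.items():
    tco = (c["purchase"] + c["training"] + c["disposal"]
           + LIFE_YEARS * (c["energy_per_year"]
                           + c["maintenance_per_year"]
                           + c["downtime_cost_per_year"]))
    print(f"{name}: TCO over {LIFE_YEARS} years = ${tco:,}")

In this invented case, the machine with the lower sticker price carries the higher lifecycle cost – precisely the irony the article warns against.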
 
     “Refining the Art of Asking Why” advances the case for the supremacy of questions set forth in “The Art of Asking” (22May2019).  The author demonstrates how “The Shainin System” (23Sep2020) goes beyond 5Why analysis to formulate questions that lead to solutions for root causes, rather than merely addressing symptoms.
 
     The primary objective of “Lightweight Product Design – An Introduction to the How and Why” (19Jun2019) was to draw attention to the trade-offs involved in lightweighting.  For an exposition on some methods of balancing mechanical design objectives, see “5 Techniques for Lightweighting:  Doing More with Less” (Tech Briefs, January 2020).
 
     Several uses for plant layout drawings were offered in “Commercial Cartography – Vol. III:  Facility Layout or Floor Plan” (6Nov2019).  Readers unconvinced of the value of layouts are referred to “One shot to get the right plot plan” (Plant Services, September 2021).  One only has to get as far as the subtitle for a good reason to reconsider:  “Everything from maintenance to the performance of the plant relies on a good layout.”
 
     “Troubleshooting is a Six-Sense Activity” (4Dec2019) describes how all of our senses can be applied to troubleshooting process or equipment issues.  It is often assumed that this will be done by maintenance technicians, but that need not be the case.  When troubleshooting is performed – in full or in part – by “front-line workers,” it is often more efficient, reducing downtime and leading to more robust solutions.  To aid their involvement, Plant Services offers “The operator’s guide to successful troubleshooting” (June 2018), in which a 7-step process is outlined.  The six senses are primarily involved in the first two steps, which involve defining the problem and collecting information.  They are also used in the testing phase, as comparisons with prior performance will be made reflexively.  Though it is presented as the operator’s guide, the structure presented is useful in any role.
 
     Have your meetings improved in the three years since “Meetings:  Now Available in Productive Format!” (18Dec2019) was published?  For most, sadly, the answer is “no.”  For many, the definition of “meeting” may have drastically changed; fortunately, how groups can conduct productive meetings has not changed substantially.  Brief introductions to some noteworthy additions to the conversation follow.
“Focus Group” (PM Network, March/April 2020) shares input from several project managers on how to keep meetings on topic.
“The Surprising Science Behind Successful Remote Meetings” (MIT Sloan Management Review, May 21, 2020) cites research that is even more pessimistic than that cited in “Meetings.”  The author claims that “only around 50%” of meeting time is serving its intended purpose and that remote meetings are even less effective.  Aside from the inevitable use of technology, the advice given is remarkably similar to the prior discussion of meeting improvement.
“How to boost people’s energy and productivity during meetings” (strategy + business, June 14, 2021) targets the group’s leader or facilitator.  As the title suggests, it is focused on maintaining engagement once a meeting has commenced.  It is important to remember that proper preparation is also critical to achieving this goal.
“End your meeting with clear decisions and shared commitment” (strategy + business, September 13, 2021) reinforces the need for a decisive end to a meeting.  That means that everyone understands what was decided and who is responsible for each action item.  Distributing meeting minutes reinforces commitment and accountability.
“Turn your meetings into jam sessions” (strategy + business, October 17, 2022) proposes a free-form period to begin meetings.  The exploration of nuance it is intended to allow is usually conducted outside meetings; the time limit proposed may not be sufficient to provide clarity for all team members.  If you choose to experiment with this approach, be sure to include it on the agenda!
 
     The attention paid to digital tools has only increased since the series ran in “The Third Degree” in early 2020.  Occasionally, an article makes me realize I could/should have made some points more explicitly.  Systems as complex as an entire city were mentioned in “Double Vision:  Digital Twins Materialize Operational Improvements” (29Jan2020), but the need for linking digital twins of various types to monitor overall performance was only implied.  “How Digital Twins Are Reinventing Innovation” (MIT Sloan Management Review, January 14, 2020) helps close that information gap.
     Similarly, the post may have benefitted from further discussion of the concept of synchroneity of physical-digital twin pairs.  This realization was aided by “Why Your Digital Twin Approach Is Not Built to Last and What to Do About It Now” (ReliabilityWeb).  Maintaining synchroneity of the twin pair requires updating digital models to reflect real-world conditions.  Nominal conditions and assumptions upon which models are built may require adjustment at installation and throughout the physical asset’s operational lifecycle.  In addition to allowing remote control of a physical asset, synchroneity improves performance monitoring and prediction, enabling condition-based maintenance.
     Virtual Reality (VR) (12Feb2020) and Augmented Reality (AR) (26Feb2020, 11Mar2020, 25Mar2020) are also attracting increasing attention.  As the technologies continue to improve, the arguments made for them in the series become stronger, but do not change significantly.  One note to make is the addition of an “umbrella” term, Extended Reality (XR), “to describe the technologies that merge the real and virtual worlds.” (“Advancing Ahead of the Architectural Curve.”  Syracuse University Magazine, Fall 2022).  Although this term is redundant, it may be useful as a search term as its use grows.
 
     “Effective Leaders Decide About Deciding” (MIT Sloan Management Review, April 27, 2022) presents a 2 x 2 matrix to help leaders decide how to delegate decisions and communicate responsibilities.  This approach can be used to delegate to individuals or groups.  As such, it is worth considering in conjunction with the “How should a decision-making standard be structured?” section of “Making Decisions – Vol. IV:  Fundamentals of Group Decision-Making” (20May2020).
 
     “Learning Games” (30Dec2020) presented some benefits of a less-formal approach to education and training when appropriate for the subject matter and target audience.  Additional references are cited below; it is left to the reader to evaluate games’ potential for intended participants.
“Teaching Sustainability Leadership in Manufacturing: A Reflection on the Educational Benefits of the Board Game Factory Heroes.”  (Mélanie Despeisse, 2018)
The Beer Game
“Teaching Energy Efficiency in Manufacturing Using Gamification: A Case Study.”  (Mélanie Despeisse and Peter Lunt, 2017)
“An Executive Decision-Making Game Designed to Observe Executive Decision-Making Behavior.”  Richard Wayne Nicholson, 1961.
 
     “Are Mentors Modeling Toxic ‘Ideal Worker’ Norms?” (MIT Sloan Management Review, October 12, 2022) bridges subjects of two previous posts:  “Sustainability Begins with Succession Planning” (11Apr2018) and “Managerial Schizophrenia and Workplace Cancel Culture” (9Mar2022).  If mentors advocate “going with the flow,” toxic workplace cultures and counterproductive behaviors are perpetuated.  Real change agents value true diversity over lip service.
 
     A fondness for analogy has been openly shared throughout “The Third Degree;” “Eight Analogical Levels of Operations Management” (24Aug2022) provides a particularly blatant example.  Although it did not fit well within the 8-level framework presented, “Max-Q:  between too much and not enough” (Plant Services, July 2021) provides another aeronautical analogy for management that is worthy of consideration.  The Max-Q concept is applicable at each of the eight analogical levels of Operations Management.
 
     “How to turn ESG policy into ESG practice” (Plant Services, October 2022) cites a survey in which 45% of respondents identified “undefined financial benefits” as one of “the biggest obstacles to achieving ESG goals.”  This should have been referenced in “Sustainability vs. Profitability – A ROSI Outlook” (14Dec2022) to help explain why an organization may want to conduct a ROSI analysis.  Unfortunately, this article was in my reading queue at the time that installment was written and posted.  It does, however, expose some of the motivation for writing posts such as this one.

Projections
     Future installments of “The Third Degree” are expected to follow an established pattern.  They will aim to:
  • Cover a wide range of topics relevant to manufacturing and service operations.  See “Categories” in the sidebar to filter posts.
  • Supplement overviews of current thinking with alternative viewpoints to spur creative problem-solving.
  • Enlighten the novice and refresh the seasoned practitioner.
  • Inject levity where possible to keep “dry” subjects interesting.
  • Provide updates to discussions of important topics.
     An important characteristic of “The Third Degree” is what you will not get:  AI-generated content.  Despite impressive advancements that have been the focus of much attention in recent weeks, artificial intelligence remains devoid of human insight, empathy, and ethics.  These are at the core of JayWink Solutions and “The Third Degree;” thus we will continue to offer the genuine article.
 
     Thank you for being my “choir.”  Have a happy and healthy holiday season and a glorious new year!
     Please leave feedback, suggestions, and pleas for assistance in the Comment section, contact JayWink Solutions by the method of your choosing, or schedule an appointment.
 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Sustainability vs. Profitability – A ROSI Outlook]]>Wed, 14 Dec 2022 15:30:00 GMThttp://jaywinksolutions.com/thethirddegree/sustainability-vs-profitably-a-rosi-outlook     Claims about the impact of sustainability initiatives – or the lack thereof – on a company’s financial performance are prevalent in media.  Claims cover the spectrum from crippling, through negligible, to transformative.  Articles making these claims target audiences ranging from corporate executives to non-industry activists, politicians, and regulators.  Likewise, the articles cite vastly differing sources to support claims.
     These articles are often rife with unsupported claims and inconsistencies, are poorly sourced, poorly written, and dripping with bias.  The most egregious are often rewarded with “likes,” “shares,” and additional “reporting” by equally biased purveyors of “the word.”  These viewpoint warriors create a fog of war that makes navigating the mine-laden battlefield of stakeholder interests incredibly treacherous.
     The fog of war is penetrated by stepping outside the chaos to collect and analyze relevant information.  To do this in the sustainability vs. profitability context, a group from the NYU Stern Center for Sustainable Business has developed the Return on Sustainability Investment (ROSI) framework.  ROSI ends reliance on the incessant cascades of conflicting claims, providing a structure for investigating the impacts of sustainability initiatives on an organization’s financial performance.
Mediating Factors
     The NYU team has identified nine “drivers of corporate financial performance,” dubbed mediating factors.  Financial drivers are relevant to all forms of management, but the nine mediating factors identified have been found to be particularly powerful in sustainability management.
     Each of the nine mediating factors is presented below, illustrating links between an organization’s sustainability efforts and its financial performance.
Operational Efficiency.  Operations managers are in perpetual pursuit of efficiency improvements; the desire to “do more with less” is never sated.  A focus on sustainability reveals opportunities for efficiencies that may be overlooked in “traditional” analyses.  For example, reducing the amount of process water required lowers the costs of supply and consumption, processing, recycling, and disposal (a brief numerical sketch follows this list of factors).  Another example is the installation of more efficient equipment to reduce energy costs.  In both examples, lower demand for the commodity (water, electricity, gas, etc.) results in a subsequent reduction in required capacity of equipment and infrastructure, further enhancing operational efficiency.
Innovation.  Interest in sustainability shifts focus from creating new features to serving required functions more efficiently.  This can be achieved through design (e.g. materials, specifications), processing (e.g. methods, quality assurance), delivery (e.g. packaging, logistics), order processing, scheduling, or any component of a product or service offering, be it tangible or intangible.
Sales and Marketing.  Highlighting the “greenness” of a product or service attracts attention from the increasingly environmentally-sensitive marketplace, leading to increased sales.  Effective marketing of a sustainable product or service could increase market share, open new markets, or justify premium pricing.  If executed exceptionally well, the provider’s marketing may be supplanted by word-of-mouth and social media attention; reduced marketing effort is a direct benefit to the bottom line.
Customer Loyalty.  Establishing a reputation for sustainable quality spurs repeat purchases by environmentally-conscious consumers.  Loyal customers become advocates, “spreading the word” to encourage others to consume sustainably.  The dedication of these customers limits promotional expenses required of the provider to maintain, or even increase, market share.
Employee Relations.  Employees tend to feel a deeper connection with employers that demonstrate commitment to environmental and social responsibility.  This often manifests in higher morale and lower turnover, which increase organizational performance.  Lower recruitment and training costs, fewer quality issues, higher productivity, and fewer labor shortages contribute to improved financial performance.
Supplier Relations.  People like to associate with those that share their values.  This doesn’t change when individuals are grouped into companies; organizations prefer to do business with other organizations that share their collective values.  Relationships are strengthened by a shared commitment to sustainable practices, reducing sourcing costs and increasing opportunities for further development of efficiencies.
Media Coverage.  Media exposure is an extension of marketing efforts.  Organizations aim to manage the message to the extent possible, maximizing their “good press” and minimizing detrimental effects of any negative attention they may receive.  If the media coverage received is overwhelmingly positive, the credibility it lends facilitates relationship-building with employees, customers, suppliers, and other stakeholders.
Stakeholder Engagement.  Sustainability initiatives “open doors” to the communities in which organizations operate.  Demonstrating a commitment to stakeholders’ safety, prosperity, and longevity weakens resistance to operations and presents opportunities for collaborations that yield benefits for the organization and the community at large.
Risk Management.  A focus on sustainability places an organization in a proactive stance.  Managers are more likely to recognize risks and develop strategies to avoid or minimize them and to mitigate the effects of occurrences.  Proactive measures provide financial benefits before issues occur (e.g. consistent operations, lower insurance premiums) and during disruptions (e.g. contingency activation, rapid recovery).  Proactive organizations are also better positioned to respond to changes in market conditions and regulatory schemes than less forward-thinking competitors, reducing the financial impacts of these inevitable incidents.
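     As referenced under Operational Efficiency above, the sketch below tallies hypothetical annual water-related costs before and after a reduction in process-water demand.  It is a minimal illustration in Python; the rates and volumes are assumptions for demonstration only.

# Hypothetical annual water-cost comparison -- illustrative figures only.
RATES = {"supply": 4.50, "treatment": 2.25, "disposal": 1.75}   # $ per 1,000 gallons

def annual_water_cost(kgal_per_year, rates=RATES):
    """Total yearly cost across supply, treatment, and disposal."""
    return kgal_per_year * sum(rates.values())

baseline = annual_water_cost(12_000)   # 12 million gallons per year
improved = annual_water_cost(9_000)    # after a 25% reduction in demand

print(f"Baseline: ${baseline:,.0f}")                        # $102,000
print(f"Improved: ${improved:,.0f}")                        # $76,500
print(f"Savings:  ${baseline - improved:,.0f} per year")    # $25,500

     Savings of this type become direct inputs when costs and benefits are quantified in the ROSI methodology described below; reduced demand may also allow smaller pumps, treatment equipment, and storage, compounding the benefit.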
     Each of the nine mediating factors provides a context in which sustainability initiatives can be evaluated.  These are drivers of financial performance and, as such, should be referenced when evaluating projects and translating intangibles into monetary terms as part of the ROSI methodology.
 
ROSI Methodology
     The Return on Sustainability Investment (ROSI) methodology developed at the NYU Stern Center for Sustainable Business is a five-step iterative process.  Discoveries in each step may require previous steps to be revisited in order to produce a thorough, accurate, and useful analysis.  An overview of the framework is presented in Exhibit 1.
     Each of the five steps of the ROSI methodology is presented below.  Descriptions are necessarily generic, as circumstances for unique organizations are highly diverse.
Identify sustainability strategies.  Compile a comprehensive list of sustainability-related activities in which the organization is engaged or is considering.  Activities may be explicitly aimed at sustainability (e.g. zero landfill solid waste target) or other objectives may have prompted activities with sustainability components (e.g. delivery route optimization).  Both types of activities contribute to sustainability objectives and, therefore, should be included in the analysis.  Advocacy groups publish reference materials and standards that can be used to guide this effort.
Identify potential benefits of sustainability strategies.  Compile a comprehensive list of financial and social benefits that may be obtained via sustainability initiatives.  At this stage, benefits are typically recorded in nonmonetary terms, such as increased visibility, improved reputation in the community, or boosted morale.  Consider each of the nine mediating factors when brainstorming potential advantages of sustainability activities.  This list can be compiled for current and/or proposed activities.
Quantify costs and benefits of sustainability strategies.  To evaluate current activities, collect performance data to compare to previous practices.  Strategy proposals rely on estimates of performance derived from industry reports, supplier recommendations, academic studies, or other sources.  Each of the nine mediating factors must again be considered; some creativity may be required to develop a consistent method of quantifying intangible or nonmonetary benefits.
Document assumptions and estimate uncertainties.  Uncertainty is inevitable.  Ignoring it is unacceptable.  By documenting assumptions that led to the conclusions drawn, estimates and strategies can be modified should flaws in those assumptions be discovered.  Defining ranges of likely values – for inputs or outputs – permits sensitivity analysis and scenario-based planning.  Without documentation of assumptions and uncertainty, deviations from anticipated results are inscrutable.
Calculate monetary value of sustainability strategies.  Various metrics can be used to evaluate the financial performance of sustainability activities.  Of course, a return on investment (ROI) percentage can be calculated.  Similar projects can be compared on the basis of earnings before interest and taxes (EBIT) or another standard accounting measure.  To compare projects that are vastly different in duration, scope, or size of investment, the best approach may be to calculate the net present value (NPV) of each.  Whatever metric or method of comparison is used, decision-makers must agree that it is valid and instructive.
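     To make this step concrete, the sketch below computes a simple ROI and NPV for a hypothetical efficiency project under low, base, and high benefit scenarios, echoing the ranges recommended in the previous step.  It is a minimal illustration in Python; the cash flows and discount rate are assumptions for demonstration, not values prescribed by the ROSI framework.

# Hypothetical project evaluation -- illustrative figures only.
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 investment (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

investment = -120_000                                          # year-0 outlay
scenarios = {"low": 25_000, "base": 35_000, "high": 45_000}    # annual benefit estimates
rate, years = 0.08, 6                                          # discount rate, horizon

for name, benefit in scenarios.items():
    flows = [investment] + [benefit] * years
    simple_roi = (benefit * years + investment) / -investment  # undiscounted
    print(f"{name:>4}: NPV = ${npv(rate, flows):>9,.0f}   simple ROI = {simple_roi:.0%}")

     Under these assumptions, the low scenario yields a negative NPV while the base and high scenarios are positive, reinforcing why documented assumptions and ranges (step 4) are essential to a defensible analysis.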
     Repeat the process, incorporating revelations from previous iterations, until the information is stable.  A completed ROSI analysis can be used to make decisions on investments and to implement or refine sustainability strategies.  The scope of decisions can range from the management of a single process to corporate policy.
 
Challenges and Caveats
     Many of the articles mentioned in the introduction cite a correlation between organizations’ financial performance and their sustainability efforts.  The correlation is implied to be positive, but this is often left unstated.  Allusions to a causal relationship are also common, though the direction of causation may be the reverse of that implied.  Is it possible that the case study companies are able to implement broad-based sustainability strategies because they have been financially successful?
     Failing to explore – or even acknowledge – this possibility is not the only crime against journalism frequently committed.  There are too many bad examples to critique them all; instead, readers are simply urged to be vigilant.  Critical thinking is crucial to successful strategy implementation; putting analysts in this mindset is what makes the ROSI methodology valuable.
     Operations management cannot focus on sustainability to the exclusion of all other matters.  There are many other factors that influence decisions and complicate analysis.  These may include the organization’s financial position, local or global market conditions, supply chain stability, labor disputes, geopolitical turmoil, or any number of other factors, many of which can be difficult to define.  Without a complete picture of the environment in which a business operates, an outsider’s assertion that a sustainability program is an “easy” choice lacks merit.
     A significant factor complicating decision-making is that sustainability is not a purely technical matter.  It is heavily influenced by economics and psychology.  The economic influences are more straightforward; some of them have already been presented.  Psychology enters the frame in the form of behaviors, particularly in the disparities between stated and revealed preferences.
     When asked to choose between two hypothetical options (say, paper or plastic straws), many respondents will declare a preference for the socially acceptable option (paper straw).  When faced with an actual choice, however, the socially acceptable option is often forsaken because it is inferior (functionally unacceptable).  The revealed preference is for plastic straws.  When this occurs, strategies must be reconfigured to address the realities of consumer behavior.
     Another psychological factor that should be considered is consumers’ perceptions of the cost and quality of “green” products and services.  A reputation for higher quality or cost can increase sales and improve margins, whether or not the perceptions are accurate.  Hence the danger consumers face from “greenwashing” – the practice of claiming environmental stewardship in excess of actual practices.  The opposite perception may warrant an education campaign, a component of the sales and marketing mediating factor.
     Analysts must be aware of common traps to avoid falling victim to them.  One such trap is overstating benefits or underestimating costs by shifting responsibility or effects to a new location.  Converting a fleet of delivery vehicles from internal combustion to battery electric vehicles does not eliminate emissions.  Emissions are transferred from tailpipe to generating plant, but must still be accounted for.  This is a rearranging effect (see “Revenge ON the Nerds” for a discussion of rearranging and other revenge effects).
 
In Conclusion
     The Return on Sustainability Investment (ROSI) methodology can be used in two ways.  As a backward-looking analysis, performance of current or past practices can be evaluated.  This is useful for testing the validity of assumptions and accuracy of estimates used to make prior decisions in order to apply lessons learned to new project decisions.  As a forward-looking analysis, ROSI aids effective decision-making, improving forecasts of financial performance related to proposed sustainability initiatives by ensuring a more comprehensive review of relevant information.
     A ROSI analysis may reveal that a net negative financial impact on the organization can be expected for a particular implementation.  This does not preclude project execution, however; the implementation may not be optional due to safety, regulatory, or other issues.  The completed analysis informs managers of the magnitude of the financial impact and is the starting point for the development of recovery plans.  It is important to remember that ROSI and other analyses inform decisions; they do not make them.  Judgment, intuition, and creativity must still be applied to make effective decisions.  Without these human contributions, IBM’s Watson may as well be CEO.
     If an organization is actively engaged in lean practices and regularly prepares financial justifications for projects, executing the ROSI methodology is a natural extension.  It builds on existing skills by providing a lens through which attributes of sustainability are focused.  As such, ROSI is less a new method of analysis than an adaptation of existing analysis methods for a specific purpose.  Implementing an adaptation, rather than a new method, facilitates the selection of metrics to be used for assessment and comparison of alternatives.  ROSI can be executed within most existing organizational structures, including information systems, departmental boundaries, and resource allocations.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] “The Return on Sustainability Investment (ROSI): Monetizing Financial Benefits of Sustainability Actions in Companies.”  U. Atz, T. Van Holt, E. Douglas, and T. Whelan.  In Sustainable Consumption and Production, Volume II:  Circular Economy and Beyond.  R. Bali Swain and S. Sweet (eds.).  Springer Nature Switzerland AG, 2021.
[Link] “How to Talk to Your CFO About Sustainability.”  Tensie Whelan and Elyse Douglas.  Harvard Business Review, Jan/Feb 2021.
[Link] “Does Environmental Management Improve Financial Performance?  A Meta-Analytical Review.”  Elisabeth Albertini.  Organization & Environment, 2013.
[Link] “Most manufacturers say cost pressures are 'restricting' green strategies.”  Will Phillips.  Supply Management, 24 November 2022.
[Link] “Choosing Sustainability Is Easier Than You May Think.”  Michael Xie.  Forbes, 28 November 2022.
[Link] “Why industry is going green on the quiet.”  Cassandra Coburn.  The Guardian, 8 September 2019.
[Link] “Resource efficiency: Can sustainability and improved profit go hand-in-hand?”  The Manufacturer, 11 June 2019.
[Link] “5 Reasons It Pays to Implement Sustainable Manufacturing Practices.”  Thomas Insights, 30 October 2019.
[Link] “The incompatibility of benefit–cost analysis with sustainability science.”  M. Anderson, M. Teisl, C. Noblet, and S. Klein.  Sustainability Science, 13 September 2014.
[Link] “Financial Valuation Tool.”  GLOBAL VALUE tool showcase.
[Link] “Going Green to Make Green: Harnessing the Power of Sustainability in Business.”  Kristin Manganello.  Thomas Insights, 5 July 2018.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Revenge ON the Nerds]]>Wed, 30 Nov 2022 15:30:00 GMThttp://jaywinksolutions.com/thethirddegree/revenge-on-the-nerds     Unintended consequences come in many forms and have many causes.  “Revenge effects” are a special category of unintended consequences, created by the introduction of a technology, policy, or both that produces outcomes in contradiction to the desired result.  Revenge effects may exacerbate the original problem or create a new situation that is equally undesirable if not more objectionable.
     Discussions of revenge effects often focus on technology – the most tangible cause of a predicament.  However, “[t]echnology alone usually doesn’t produce a revenge effect.”  It is typically the combination of technology, policy (laws, regulations, etc.), and behavior that endows a decision with the power to frustrate its own intent.
     This installment of “The Third Degree” explores five types of revenge effects, differentiates between revenge and other effects, and discusses minimizing unanticipated unfavorable outcomes.
     Revenge effects could easily be confused with side effects, but there is an important difference.  Side effects are incidental to the main effect, unrelated to objectives of the decision or action, and may be temporary.  Revenge effects, on the other hand, are directly related and in opposition to objectives.  Additional action must be taken to counter revenge effects, lest the main objective be negated.
     Trade-offs are similar to side effects, in that an undesirable outcome must be accepted in order to attain the main effect.  Usually indefinite, trade-offs can be accepted as a reasonable cost of achieving the main objective, or additional action can be taken to mitigate the undesired effects.
     In Tenner’s terminology, a reverse revenge effect is an unexpected advantage of a technology or policy implementation.  In the generic terms of unintended consequences, this is a serendipitous outcome.

Five Types
     Revenge effects are activated by various mechanisms depending on the technology involved, who or what is affected, and how.  Five types of revenge effects are described below; examples are provided to clarify the differences.
     When the solution to a problem causes the very same problem, or multiplies it, the “new” problem is a regenerating effect.  The “solution” regenerates the problem.
Example:  Settlers occupying lands with harsh winter climates built log cabins for greater protection from the elements than an animal hide can provide.  When a fire used to heat the cabin gets out of control, damaging the structure, the frontiersman is exposed to wind and rain in addition to frigid temperatures.  If the fire is successfully contained in a stone fireplace, smoke and ash are still present in the air and burns may result from tending the fireplace.  In either case, the fire causes regenerating effects that impact the health and safety of the cabin’s inhabitants.
     Increasing the capacity of a system invites increased utilization.  The result is stagnant performance brought to you by recongesting effects.
Example:  As use of mobile phones increased, performance dropped.  Service providers countered this by building additional towers and expanding coverage.  The increased performance attracted new cellular customers; the performance drop repeated… several times.  Data transmission followed a similar pattern with the transition to smartphones.  Recongesting effects continue to occur until the system’s capacity exceeds the demands placed on it.
     Repeating effects commonly occur when introduction of technology changes behavior.  Rather than simply facilitating a task so it is less time-consuming, the availability of a tool or aid leads to more frequent repetition of the task.  The time spent on the task, using the tool or aid, may match or exceed that spent before the technology was developed.
Example:  Lawn care requires much more time and energy when performed with a walking mower and manual trimmer than with a riding mower and string trimmer.  The reduced effort required encourages more frequent grooming, raising residential landscaping standards.
     Implementation of technology may raise standards of performance or encourage expansion of its use beyond its original intent.  The resulting challenges are recomplicating effects.
Example:  Computer numerical control (CNC) substitutes machine control for human operation to reduce errors and improve consistency in manufacturing.  Coordinate measuring machines (CMM) provide reliable verification of dimensions.  The increased precision and repeatability of these machines inspire more stringent design specifications to perform the same function as was achieved with manual machines and vernier calipers.  The difficulty of obtaining “acceptable” results has been maintained or increased by way of recomplicating effects.
     Likely the most common, rearranging effects are also the most recognizable.  If you have ever played Whac-A-Mole, you are familiar with the concept of rearranging effects – addressing a problem in one place only causes it to “pop up” in another.
Example:  The shifting of responsibility for product flaws between two “quality” groups (similar to another game – Volleyball), discussed in “Beware the Metrics System,” exemplifies rearranging effects.  Each group’s metrics performance depends on the other’s in a zero-sum game.
     Similarly, flood control measures do not prevent flooding; they simply relocate it to other (unprotected) areas.  Air conditioning does not eliminate heat; it transfers it elsewhere.  Trash barges move refuse from large cities to less populated areas; they do nothing to address excessive waste.  Rearranging effects are ubiquitous; you have probably noticed many, though you did not have a name for them.
 
Respect the Law Redux
     Several strategies for minimizing undesirable outcomes were presented in “Unintended Consequences – It’s the Law.”  As a subset of unintended consequences, each of the strategies described is applicable to revenge effects.  One may stand above the rest, however, as the key to preventing revenge effects:  higher-order thinking.  Revenge effects are multidimensional, involving changes in standards, behaviors and norms, locations, magnitudes, and more.  These dimensions are orthogonal to the “layered” nature of unintended consequences, further complicating the task of predicting outcomes.
 
     Discussions of unintended consequences typically presume that these effects are in addition to the intended outcomes.  However, this may not be the case.  Failure to attain desired outcomes may be due to poor planning, flawed execution, or the “bite” of revenge effects.  While the first two may require iteration, the third may require an entirely new action plan, possibly more urgent than before.
     In other scenarios, effects are transformed from acute to chronic problems.  As the development of safety apparatuses and medical interventions increases the survivability of many accidents and diseases, the effects of these occurrences become “spread in space and time.”  A helmet that “reduces” the consequences of a head injury from death to brain damage leaves the victim in a state of diminished capacity for the remainder of his/her life.  Likewise, a cardiac patient may recover from a heart attack, but heart disease will forever plague him/her.
     The specter of potential revenge effects is ever-present.  Thorough analysis is essential to effective decision-making; contingency planning is required to minimize the impacts of effects that materialize.  Development of several alternatives may be necessary to accommodate various combinations of repercussions that could result.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] Why Things Bite Back:  Technology and the Revenge of Unintended Consequences.  Edward Tenner. Alfred A. Knopf, 1996.
[Link] “When Technology Takes Revenge.”  Farnam Street.
[Link] “The revenge effect in medicine.”  Balaji Bikshandi.  InSight+, February 6, 2017.
[Link] “Revenge effect.”  Academic Dictionaries and Encyclopedias, New Words.
[Link] “Revenge effects.”  Stumbling & Mumbling. January 24, 2014.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Unintended Consequences – It’s the Law]]>Wed, 16 Nov 2022 15:30:00 GMThttp://jaywinksolutions.com/thethirddegree/unintended-consequences-its-the-law     The Law of Unintended Consequences can be stated in many ways.  The formulation forming the basis of this discussion is as follows:
“The Law of Unintended Consequences states that every decision or action produces outcomes that were not motivations for, or objectives of, the decision or action.”
     Like many definitions, this statement of “the law” may seem obscure to some and obvious to others.  This condition is often evidence of significant nuance.  In the present case, much of the nuance has developed as a result of the morphing use of terms and the contexts in which these terms are most commonly used.
     The transformation of terminology, examples of unintended consequences, how to minimize negative effects, and more are explored in this installment of “The Third Degree.”
Unanticipated Consequences of Social Action
     Sociologist Robert K. Merton popularized the concept of unanticipated consequences in his 1936 paper “The Unanticipated Consequences of Purposive Social Action.”  In this paper, Merton presents several causal factors; these are explained, in brief, below.
     Incomplete information creates “blind spots” in decision-makers’ understanding of likely or potential outcomes.  Merton does not use this term; it is the conjunction, for simplicity, of two concepts.  Merton differentiates between knowledge possessed (the “ignorance” factor) and the knowledge conceivably obtainable.  Ignorance is a second-order cause; the distinction has little value at this level of discussion.  Therefore, incomplete information can be used to describe any lack of knowledge, regardless of its cause.
     Drug and alcohol prohibition provides the quintessential example of unintended consequences.  Lawmakers do not account for the ways in which people will respond to legislation.  Black markets are established, where the elimination of open competition allows organized crime to flourish.  As profits from illicit sales rise, escalating violence follows.  Systems develop to facilitate criminal activity, leading to an expansion into other enterprises – counterfeiting, human trafficking, or any other.  And then there’s the corruption…
     Even if decision-makers possess “perfect” (i.e. complete, accurate) information, the potential for error remains.  Several types of error could occur, including analysis, interpretation, and prediction errors.  All decisions are susceptible to errors; this is one reason many are paralyzed by the need to make one (see the “Making Decisions” series).
     Transitioning to piece-rate wages in a manufacturing operation often suffers from error in analysis and prediction.  The wage structure may be changed with the expectation of an increase in employee wages and improved standard of living.  A simultaneous increase in profits, due to higher production rates, may also be anticipated.  However, an intense focus on quantity may result, causing workers to accept lower quality and relaxed safety standards.  The end result is lower sales, degrading conditions, and a decline in employees’ overall well-being.
     The “imperious immediacy of interest” refers to situations in which a sense of urgency prompts actions, sometimes extreme, without thorough consideration of risks or consequences.  Attention is focused on addressing the most pressing concern, while all other issues are essentially ignored.  Consider the case of a parent entering a burning building to rescue a child.  The imperious immediacy of a child in danger prevents consideration of the probability of death from smoke inhalation, roof collapse, etc. before running inside.
     The “basic values” of a country, family, or other cultural group can lead people to act in predetermined ways without consideration of potential outcomes or alternative actions.  The term “cultural norms” is often used to describe a similar phenomenon.  For an illustration of the impact of basic values or norms, consider the following example.
     In a society in which a woman is subjugated to the will of her father, caring for him in his elder years may be a primary responsibility.  She is likely to do so without hesitation, though it may deprive her of opportunities to create a fulfilling life for herself.  She may be unable to develop social ties, through marriage or otherwise, that ensure she has a caregiver in her time of need.
     Finally, public predictions can influence behavior to such an extent that the predictions do not come true.  This is known as a “self-defeating prophecy,” where awareness of a prediction prevents its fruition.  For example, the prediction of the extent to which ozone depletion would affect the earth and its inhabitants led to bans on the use of chlorofluorocarbon (CFC)-based refrigerants and aerosols.  Instead of continued depletion, the ozone layer has entered a recovery phase.
     As an aside, self-defeating prophecy has a better-known counterpart – the self-fulfilling prophecy.  In this case, the prediction influences behavior such that realization of predicted outcomes is accelerated.  For example, news of distress in a nation’s financial system may cause a “run” on banks, precipitating collapse.
 
Basic Terminology
     Consequences of a decision or action can be considered on two dimensions, as summarized in Exhibit 1:  anticipated/unanticipated and desirable/undesirable.  The anticipated/unanticipated dimension describes whether or not a certain outcome had been predicted.  It says nothing about how a prediction was made or why an outcome was not predicted (error, incomplete information, etc.).  Accuracy of predictions is a secondary matter for purposes of this discussion; decision-makers are credited for conducting an analysis, despite its imperfections.
     The desirable/undesirable dimension is highly subjective, requiring a normative judgment.  The perspective of a person is fundamental to this judgment; there may be strong disagreement regarding the desirability of an action or outcome (e.g. labor vs. management, homeowner vs. property developer, etc.).  In the public policy realm, officials may declare opposition to a decision while secretly finding it personally or politically beneficial.
     Consequences of a decision or action that are anticipated and desirable are the intended outcomes – the objectives and motivations for it.  Outcomes that are neither desirable nor anticipated are the unintended consequences.  Evolution and usage of this term is discussed further in the next section.
     Outcomes of a decision or action that are undesirable, but anticipated, are called “double effect” consequences.  Double effect consequences occur when anticipated negative effects of an action are deemed acceptable in light of the desirable outcomes expected.  For example, the displacement of valley residents is accepted to obtain the benefits of man-made “lakes” or reservoirs, such as hydroelectric power and drought resilience.
     Outcomes that are unanticipated, but desirable, are attributed to serendipity.  In our modern society, however, it is likely that some grandstander will take credit for any positive effects.  Worse, this ne’er-do-well will probably spout buzzwords like synergy, equity or other words s/he doesn’t understand.
 
Transformation of Terminology
     The problem of unintended consequences has been considered by a number of historical luminaries, including Niccolo Machiavelli, Adam Smith, Karl Marx, and Vilfredo Pareto.  Despite the intellectual might of his predecessors, Merton’s work seems to be the foundation of contemporary thought on the subject.
     Use of Merton’s term of art, “unanticipated consequences,” is uncommon today; it has been supplanted by “unintended consequences.”  In fact, Merton himself made this transition in later works.  He also dropped the word “purposive” from later versions of his seminal paper, claiming it is redundant (“all action is purposive”).  This is an odd claim, however, as it seems important to him to differentiate between conduct (purposive) and behavior (reflexive) in the original paper.
     Merton’s abandonment of purposive is peripheral to the current discussion.  It is mentioned only to demonstrate that the transition in terminology used in his work is not isolated to one word and seems to indicate evolving views or a new agenda.  We cannot be certain of his motivations, of course, but they are worth pondering.
     In modern usage, the term “unintended consequences” is typically invoked to imply that referenced outcomes were both undesirable and unanticipated.  Unfortunately, it can also be used to make intentionally opaque statements.  Those making such statements rely on a form of self-deception or, at minimum, a lack of critical analysis to manipulate an audience.  Receivers of the message are encouraged to interpret “unintended” to mean “unanticipated”, though that may not be true.  The unforeseen is presented as unforeseeable to avoid culpability.  Similarly, the messages are intended to convey to the audience that the speaker finds the outcomes objectionable – even if profiting from them – to project a favorable image.  While not exclusive to it, nefarious use of ambiguity is pervasive in the political arena.  For this reason, discussions of terminological distortion frequently return to examples in politics.
     As de Zwart points out, conflating “unintended” and “unanticipated” obscures the fact that many decisions are difficult or unpopular because undesirable outcomes are anticipated.  Opacity is often used to deflect or minimize responsibility for decisions or their consequences, particularly when there is no solid, logical defense (i.e. rational basis) to offer.
     Critics of a decision often cite unintended consequences as evidence of decision-makers’ incompetence, complacency, or indifference.  Doubts about the truth of claims that negative consequences were unanticipated may even prompt accusations of blatant malice.
     If decision-makers seek to distance themselves further from responsibility for actions, Merton provides additional replacement terms.  There are no longer intended outcomes (objectives) and unintended consequences; there are merely “manifest functions” and “latent functions.”  It is difficult to imagine a purpose to which these terms are better suited than providing political “cover” through opacity.  “Don’t blame me; it’s a latent function!”
 
Respect the Law
     When it comes to unintended consequences, respecting the law begins with recognizing it as a legitimate concern and accepting responsibility for all outcomes.  To minimize negative effects, unintended consequences must be considered as thoroughly as objectives.  That is, both manifest functions and latent functions require analysis.  Several strategies for minimizing unintended consequences are described below.
  • Define a reversing mechanism.  Prior to implementing a decision, define the actions necessary to reverse it, should the negative effects be deemed unacceptable.  Definition of unacceptable, i.e. what conditions will trigger reversal, should be documented for all conceivable scenarios.
  • Invert thinking.  Focus on outcomes to be avoided, working “backward” to find solutions that achieve the desired objectives.
  • Develop a disconfirmation bias.  Actively seek evidence that contradicts assumptions or conclusions about likely outcomes of a decision.
  • Engage in higher-order thinking.  Consequences are often layered.  Each outcome propagates additional consequences – some positive, some negative.  To adequately assess a decision, define and evaluate as many layers of consequences as possible to determine the decision’s net effect.  A common “layer” to be considered is risk compensation, where a reduction of risk is followed by an increase in risky behavior.
  • “Stay in your lane.”  Act only within the scope of your abilities, or “circle of competence;” seek advice from others on unfamiliar topics.  Be honest with yourself about the depth of your knowledge of relevant subjects; overconfidence leads to poor decisions and catastrophic consequences.
  • Do nothing.  If a net positive effect cannot be predicted with sufficient confidence, resist pressure to “Do something!” until an appropriate course of action can be defined.  (See “Making Decisions” – Vol. I and Vol. VII).
 
     Early use of “unintended” as a synonym for “unanticipated” may have been an innocuous substitution.  As the terminological transformation is near-complete, however, the term has come to represent purposeful ambiguity.  “Fuzzy” terminology, where a speaker’s meaning is not clear, makes disingenuous statements more defensible.  A lack of clear definition permits literal use of terms when an alternate understanding is common and vice versa.  If questioned, the misinformation is attributed to a simple misunderstanding, rather than obfuscation.  In an ironic twist, nefarious use of the term “unintended consequences” may, itself, be an unintended consequence.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] “The Unanticipated Consequences of Purposive Social Action.”  Robert K. Merton.  American Sociological Review; December, 1936.
[Link] “Unintended but not unanticipated consequences.”  Frank de Zwart. Theory and Society; April 12, 2015.
[Link] “The Law of Unintended Consequences.”  Mark Manson.  MarkManson.net.
[Link] “The Law of Unintended Consequences: Shakespeare, Cobra Breeding, and a Tower in Pisa.”  Farnam Street.
[Link] “The Law of Unintended Consequences.”  Lori Alden.  Econoclass, 2008.
[Link] “Unintended Consequences.”  Rob Norton.  EconLib.
[Link] “Unintended consequences.”  Wikipedia.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Blinded by the Light]]>Wed, 02 Nov 2022 14:30:00 GMThttp://jaywinksolutions.com/thethirddegree/blinded-by-the-light     An organization’s safety-related activities are critical to its performance and reputation.  The profile of these activities rises with public awareness or concern.  Nuclear power generation, air travel, and freight transportation (e.g. railroads) are commonly-cited examples of high-profile industries whose safety practices are routinely subject to public scrutiny.
     When addressing “the public,” representatives of any organization are likely to speak in very different terms than those presented to them by technical “experts.”  After all, references to failure modes, uncertainties, mitigation strategies, and other safety-related terms are likely to confuse a lay audience and may have an effect opposite that desired.  Instead of assuaging concerns with obvious expertise, speaking above the heads of concerned citizens may prompt additional demands for information, prolonging the organization’s time in an unwanted spotlight.
     In the example cited above, intentional obfuscation may be used to change the beliefs of an external audience about the safety of an organization’s operations.  This scenario is familiar to most; myriad examples are provided by daily “news” broadcasts.  In contrast, new information may be shared internally, with the goal of increasing knowledge of safety, yet fail to alter beliefs about the organization’s safety-related performance.  This phenomenon, much less familiar to those outside “the safety profession,” has been dubbed “probative blindness.”  This installment of “The Third Degree” serves as an introduction to probative blindness, how to recognize it, and how to combat it.
     There are three types of safety activity mentioned throughout this discussion of probative blindness (PB), often without being explicitly identified.  They are assessment, ensurance, and assurance.  Assessment activities are susceptible to PB; these activities seek to update an organization’s understanding of its safety performance and causes of accidents, but may fail to do so.
     Ensurance activities are conducted to increase the safety of operations as a result of successful assessments (beliefs about safety have changed).  Assurance activities seek to increase confidence in the organization’s safety performance.  This may involve publicizing statistics or describing safety-related features of a system or product.
     The term “probative blindness” was coined by Drew Rae, an Australian academic, safety researcher, and advocate, and his colleagues.  It is used to describe activities that increase stakeholders’ subjective confidence in safety practices beyond that which is warranted by the insight provided or knowledge created about an organization’s actual safety-related performance.  Stated another way, activities that reduce perceived risk, while leaving actual risk unchanged, exhibit probative blindness.
     It is worthwhile to reiterate, explicitly, that PB is a characteristic of the activity, not the participants.  While use of the term in this way may not be highly intuitive, it will be maintained to avoid further confusion.  The relevance of the concept to the pursuit of safety, in any context, justifies tolerating this minor inconvenience.
 
     According to Rae, et al [2], “[p]robative blindness requires:
1. a specific activity, conducted at a particular time;
2. an intent or belief that the activity provides new information about safety; and
3. no change in organisational belief about safety as a result of the activity.”
     The specific activity could be a Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (see the “FMEA” series), Hazard and Operability Studies (HAZOPS), or other technique intended to inform the organization about hazards, risks, and mitigation strategies.  The intent to provide new information differentiates the activity from pep talks, platitudes, and public affirmations.  Conducting an activity with the objective of acquiring new information does not ensure its achievement, however.
     A failure to provide new information is one mechanism by which the third element of probative blindness manifests.  It also occurs when the results or conclusions of the safety activity are rejected, or dismissed as faulty, for any reason.  This could stem from cognitive biases or distrust of the activity’s participants, particularly its leaders or spokesmen.
 
Precursors of Blindness
     There are several conditions that may exist in an organization that can make it more susceptible to probative blindness.  A “strong prior belief” in the safety of operations may lead managers to discount evidence of developing issues.  Presumably, the belief is justified by past performance; however, the accuracy of prior assessments is irrelevant.  Only current conditions should be considered.
     Preventing assessments of past performance from clouding judgment of current conditions becomes more difficult when success is defined by a lack of discovery of safety issues.  Deteriorating conditions are often ignored or discounted because an operation is “historically safe.”  Difficulty in spurring an appropriate response to a newfound safety issue, whether due to nontechnical (e.g. “political”) resistance, resource shortages, or other capacity limitation, may bolster the tendency to rely on past performance for predictions of future risk.
     A “strong desire for certainty and closure,” such as that to which the public assurance scenario, cited previously, alludes, may lead an organization to focus on activity rather than results.  For the uninitiated, activity and progress can be difficult to distinguish.  The goal is to assuage concerns about an operation’s safety, irrespective of the actual state of affairs.
     When organizational barriers exist between the design, operations, and safety functions, thorough analysis becomes virtually impossible.  Limited communication amongst these groups leads to a dearth of information, inviting a shift of focus from safety improvement to mere compliance.  Analysis is replaced by “box-checking exercises” that have no potential to increase knowledge or change beliefs about operational safety.  The illusion of safety is created with no impact on safety itself.
 
Blind Pathways
     Exhibit 1 provides a mapping of manifestations of PB.  Some of the mechanisms cited, such as failure to identify a hazard due to human fallibility, are somewhat obvious causes.  Others require further consideration to appreciate their implications.
     “Double dipping” is the practice of repeating a test until an acceptable result is obtained.  This term may be misleading, as it often requires many more than two attempts to satisfy expectations.  The more iterations or creativity of justifications for modifying parameters required, the more egregious and unscientific this violation of proper protocol becomes.
     Changing the analysis and post-hoc rationalisation can be used to end the iterative cycle of testing by modifying the interpretation of results, or the objective, to reach a “satisfactory” conclusion.  Real hazard and risk information is thus missing from analysis reports and withheld from decision-makers.
     Motivated skepticism involves cognitive biases influencing the interpretation of, or confidence in, analysis results.  Confirmation bias leads decision-makers to reject undesirable results, holding activities to a different standard than those producing more favorable results.  Reinterpretation may be attributed to normalcy bias, where aberrant system behavior is trivialized.  Seeking a second opinion is similar to double dipping; several opinions may be received before one is deemed acceptable.
     A valid analysis can be nullified by the inability to communicate uncertainty.  All analyses are subject to uncertainty; an appropriate response to analysis results or recommendations requires an understanding of that uncertainty.  If the analysis team cannot express it in appropriate terms and context, the uncertainty could be transferred, in the minds of decision-makers, to the validity of the analysis.
 
Alternate Pathways
     Accidents occur for many reasons; probative blindness is one model used to describe an organization’s understanding of the causes of an accident.  A brief discussion of others provides additional clarity for PB by contrasting it with the other models.  A summary of the models discussed below is shown in Exhibit 2.
     To restate, probative blindness occurs when safety analysis prompts no response indicating a change in beliefs about safety-relevant conditions has taken place.  Relevant information includes the existence of a hazard, risks associated with an acknowledged hazard, and the effectiveness of mitigation strategies related to an acknowledged hazard.
     The “Irrelevant” model of safety activity pertains to activities cited to demonstrate an organization’s concern for safety, though they are unrelated to a specific analysis or accident under investigation.  Citing these activities is often a “damage control” effort; spokesmen attempt to preserve an organization’s reputation in the face of adverse events.  In the “Aware but Powerless” model, activities are “neither blind nor safe.”  The organization is aware of safety issues, but responses to them, if undertaken, are ineffective.
     Each of the models discussed thus far include activities that, arguably, demonstrate concern for safety.  They differ in the influence that concern has on safety activity, but none improve accident prevention.
     Activities in the Aware but Powerless model, as made clear by its name, also demonstrate awareness of hazards within the organization.  Only one other does so – the “Lack of Concern” model.  In this model, both insufficient analysis and insufficient response to known hazards are present.  The underlying rationale is a subject for secondary analysis; overconfidence and callousness are possible motives for neglecting safety activity.
     In the final two models, activities fail to demonstrate either awareness or concern, suggesting the absence of both.  The direction of the causal relationship between the two deficiencies, if there is one, will vary in differing scenarios.  In the “Nonprobative” model, activities are not intended to discover causes of accidents or address specific safety concerns in any meaningful way.  Therefore, no awareness is generated; the absence of concern could be a cause or an outcome of pursuing nonprobative activities.
     The final model, and the simplest, is “Insufficient Safety Analysis,” wherein activities that could have revealed potential causes of accidents were not conducted.  The reasons for omission are, once again, the subject of secondary analysis that may reveal staffing shortages, lack of expertise in the organization, or other contributory factors.  Inactivity, like nonprobative activity, could be a byproduct of a lack of awareness or of concern.  The interplay of awareness, concern, and activity is presented pictorially in Exhibit 3.
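     As a reading aid, the distinctions drawn above can be restated as a simple mapping of each model to whether its activities demonstrate awareness of hazards and concern for safety.  The sketch below paraphrases the prose in this section; it is not a reproduction of Exhibit 2 or Exhibit 3, and the boolean assignments are interpretive.

```python
# Interpretive summary of the models described above (not the source's exhibits).
# None of these models improves accident prevention; they differ only in whether
# their activities demonstrate awareness of hazards and concern for safety.
MODELS = {
    "Probative Blindness":          {"awareness": False, "concern": True},
    "Irrelevant":                   {"awareness": False, "concern": True},
    "Aware but Powerless":          {"awareness": True,  "concern": True},
    "Lack of Concern":              {"awareness": True,  "concern": False},
    "Nonprobative":                 {"awareness": False, "concern": False},
    "Insufficient Safety Analysis": {"awareness": False, "concern": False},
}

for name, traits in MODELS.items():
    shown = ", ".join(trait for trait, present in traits.items() if present) or "neither"
    print(f"{name}: demonstrates {shown}")
```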
Sight Savers
     Unfortunately, there is no silver bullet that will prevent probative blindness in all of an organization’s activities.  However, following a few simple rules will significantly improve visibility of safety-relevant conditions:
  • Commit to engaging in and following through on safety activities – assessments, mitigations, training, etc.  This commitment must be made at all levels of the organization to ensure sustained effort.
  • Maintain focus on safety; allow compliance to follow.  When the focus is on compliance, true safety cannot be assured.
  • Integrate safety reviews into all organizational activities to ensure product, process, and facility safety.
  • “Check your priors.”  Set aside preconceived beliefs about safety, historical safety records, and hubris.  It is more important to save people – employees, customers, the community at large – than to save face.
  • Show your work.  Double check it.  Just as taught in elementary school math class.
     In short, the best defense is a good offense.  Proactive elimination of the precursors of blindness is the first step in performing safety activities that are appropriate and effective.  Knowledge of blind pathways helps analysis teams recognize and avoid them.  Finally, safety professionals and decision-makers should always remember that shining a light on a subject is not always illuminating.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
References
[Link] [1] “Probative Blindness:  How Safety Activity can fail to Update Beliefs about Safety.”  A.J. Rae, J.A. McDermid, R.D. Alexander, and M. Nicholson; 9th IET International Conference on System Safety and Cyber Security, 2014.
[Link] [2] “Probative blindness and false assurance about safety.”  Andrew John Rae and Rob D. Alexander; Safety Science, February 2017.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Be A Zero – Part 2:  Zero-Based Scheduling]]>Wed, 19 Oct 2022 14:30:00 GMThttp://jaywinksolutions.com/thethirddegree/be-a-zero-part-2-zero-based-scheduling     Another way to Be A Zero – in a good, productive way – is to operate on a zero-based schedule.  An organization’s time is the aggregate of individuals’ time and is often spent carelessly.  When a member of an organization spends time on any endeavor, the organization’s time is being spent.  When groups are formed, the expenditure of time multiplies.  Time is the one resource that cannot be increased by persuasive salespeople, creative marketing, strategic partnerships, or other strategy; it must be managed.
     “Everyone” in business knows that “time is money;” it only makes sense that time should be budgeted as carefully as financial resources.  Like ZBB (Zero-Based Budgeting, Part 1), Zero-Based Scheduling (ZBS) can be approached in two ways; one ends at zero, the other begins there.
     One ZBS method parallels personal ZBB, ending with zero unallocated time.  An entire day is scheduled, including breaks, meals, and sleep.  Thirty-minute blocks of time are common, but block length can be varied as needed.
     Creating an effective zero-based schedule in this manner requires that task durations be known to a reasonable degree of accuracy.  Often, this is not the case; therefore, the first several iterations of ZBS may be less than satisfying.  Scheduling of repeated, or similar, tasks will improve with experience.
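     For those who want to see the arithmetic behind “ending at zero,” the sketch below builds a hypothetical day from 30-minute blocks and confirms that no block is left unassigned.  The activities and durations are illustrative assumptions, not recommendations.

```python
# Hypothetical zero-based day plan: (activity, number of 30-minute blocks).
# Activity names and durations are illustrative only.
DAY_PLAN = [
    ("Sleep", 16), ("Morning routine", 2), ("Deep work: project A", 6),
    ("Meetings", 2), ("Lunch", 2), ("Email and small tasks", 2),
    ("Deep work: project B", 6), ("Exercise", 2), ("Dinner", 2),
    ("Personal time", 6), ("Plan tomorrow", 2),
]

BLOCKS_PER_DAY = 48  # 24 hours divided into 30-minute blocks
allocated = sum(blocks for _, blocks in DAY_PLAN)

print(f"Allocated blocks:   {allocated}")
print(f"Unallocated blocks: {BLOCKS_PER_DAY - allocated}")
assert allocated == BLOCKS_PER_DAY, "A zero-based schedule leaves no block unassigned."
```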
     Advocates of this scheduling method claim several benefits:
  • “Personal time is protected.”  Establishing a clear end time for “work” activities allows sufficient time to pursue personal interests that may otherwise be neglected or forsaken.
  • “Less ‘in-between’ time is wasted.”  Short periods of time between scheduled tasks do not support “deep-dive” work on large or important projects.  A zero-based schedule includes smaller tasks in these time slots to maintain a high level of productivity.
  • “Decision fatigue is reduced.”  Preplanning a day’s activities eliminates the rushes of “what now?” anxiety that accompany unscheduled time.
  • “Focus on goals is maintained.”  A fully-booked schedule reminds the user what is important, discouraging frivolous or unproductive activities.
  • “Productivity is sustained.”  Defined durations for task completion minimize manifestations of Parkinson’s Law, where tasks take longer to complete than necessary, simply because additional time is available.
     While these benefits sound wonderful, some skepticism is in order.  The claims described above are somewhat misleading, overgeneralized statements that do not adequately reflect the diversity of tasks required by a broad array of professions.  Perhaps an independent novelist can maintain a schedule for writing, coffee-drinking, proofreading, dog-walking, and editing, but production-support and customer-service personnel, among others, have much less control over their daily activities.
     Protecting personal time requires boundaries that a schedule cannot create.  In a culture where “off-hours” work is expected by managers who do not respect subordinates’ personal time, a zero-based schedule will be quickly defeated and dismantled.  The same problem may exist within “working hours” if managers routinely redirect employees without regard for current work in progress.
     Wasted in-between time may be reduced with a zero-based schedule.  However, a more effective strategy is to group tasks such that fewer short in-between periods exist on the schedule.  Short-duration tasks can still be inserted when an unplanned in-between time occurs, such as when a task requires less time to complete than was predicted.
     Decision fatigue is more likely to be displaced than reduced.  That is, the experience is concentrated in the planning period rather than dispersed throughout the scheduled period.  Prioritization requires decision-making; comparable levels of fatigue are likely to be induced in either case, as fatigue depends largely on the individual’s perception of the process.
     The existence of a schedule does not create, or even foster, focus.  Focus is an internal phenomenon, subject to the drives and distractions of the individual.  A schedule can merely provide reminders of previously-stated goals.  As such, it is a helpful tool, but it is not a fool-proof guide, as many advocates portray it.
     The existence of a schedule also cannot sustain productivity.  An individual’s productivity is affected by numerous factors, many of which are out of his/her control.  Circadian rhythms, illness, and fatigue are physical factors that can affect an individual’s ability to perform.  Personal issues and the task environment may also create distractions or suboptimal conditions that affect an individual’s productivity.
     The schedule acts as a reminder of where one wants to be, or believes s/he could be, in his/her work progression, but it cannot drive that progression.  It may ensure that one task ends and another begins, but it cannot ensure the quality or completeness of the work.
     Routine tasks can be scheduled with a high level of accuracy, resulting in reliable schedule compliance.  However, unique, original work, such as development projects of various types, may have highly unpredictable durations.  Discoveries may lead to new development paths that could not be pursued in a predefined schedule.  Compliance to a fully-loaded schedule is extremely difficult when engaged in this type of work.  In no way does this suggest that no plan should be made, only that it should be accepted at the outset that it will likely change.
     Requiring a fully-loaded schedule implies that unstructured time is wasted time.  Periods of thought and exploration that have not been predefined foster creativity and discovery that may be far more valuable than anything that could have been scheduled.  Spontaneity is squelched and opportunistic pursuits are precluded in a fully-loaded schedule; there is no chance to accommodate energy fluctuations or illness.  For example, a particularly strenuous – physically or mentally – session may require a recovery period before the next task can be effectively executed.  In this case, supporting well-being with a recovery period should be the highest priority, though it was not scheduled because it was not foreseen.  Schedule flexibility is key to overall productivity.
 
     The second ZBS method is a more effective alternative for many.  It parallels ZBB for business, beginning at zero; all time expenditures must be justified in order to be scheduled.  In an environment where an individual’s schedule is heavily influenced by external forces, this approach is integral to his/her productivity.  When several managers, project leaders, department heads, and coworkers can each reserve time on a person’s calendar, that person’s prospects for accomplishing anything of value can quickly evaporate.
     To maintain a zero-based schedule in this type of environment, an individual must have the autonomy to prioritize activities and to decline requests of his/her time.  In the production-support example cited above, unplanned downtime and improvement projects must take precedence over mindless meetings.
     In fact, ZBS is an effective way to eliminate pointless meetings.  Finding no value in attendance, attendees will deprioritize an unproductive meeting and remove it from their schedules.  As attendance dwindles, the meeting organizer is forced to accept the group’s perception, cancelling or reconstituting the meeting to provide value to participants.  Recurring meetings that have outlived their usefulness, or have diverged from their original intent, can be eliminated or realigned through this process.
     It should be clear that a zero-based schedule may have long periods of unallocated time.  This does not suggest, however, that the person is aimless or squandering time.  It simply means that the person has the flexibility to pursue the activity that is the highest priority at that time.
     Autonomy over one’s schedule also provides the flexibility to suggest alternative times for meetings that are deemed necessary.  This allows the person to protect his/her most productive times for activities that require high levels of energy and concentration, while attending meetings during the lulls.  Changeovers and other activities that may require special attention can also be sheltered in the schedule this way.
     To ensure continued productivity, a prioritized task list should accompany an individual’s zero-based schedule.  Upon completion of each task, the next on the list is undertaken.  During the in-between times that occur, a lower-priority task may be undertaken, simply because it can be completed in the time available.  This does not create decision fatigue; it is merely a process of comparing the estimated time required to the time available.
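     The comparison described above can be reduced to a simple rule:  take the highest-priority task whose estimated duration fits the time available.  The sketch below illustrates that rule; the task names and estimates are assumptions made for the example.

```python
# Hypothetical prioritized task list: (priority rank, task, estimated minutes).
# Lower rank = higher priority; names and estimates are illustrative only.
TASKS = [
    (1, "Resolve line-stoppage follow-up", 90),
    (2, "Draft improvement-project proposal", 60),
    (3, "Review supplier corrective action", 25),
    (4, "Update training matrix", 15),
]

def next_task(tasks, minutes_available):
    """Return the highest-priority task whose estimate fits the time available."""
    for rank, name, estimate in sorted(tasks):
        if estimate <= minutes_available:
            return name
    return None  # nothing fits; the gap remains unstructured

print(next_task(TASKS, 120))  # ample time -> the highest-priority task
print(next_task(TASKS, 20))   # a short in-between gap -> a smaller task that fits
```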
     The prioritized list also contributes more to one’s focus on goals than a fully-loaded schedule.  A list of required activities that stretches into the foreseeable future is more motivating than a defined time to end one activity and begin another.  It can provide a sense of purpose and direction that a shorter-term view cannot, making it a stronger deterrent to manifestations of Parkinson’s Law than a fully-loaded schedule.  A person can also get back on track immediately following an emergency, without the need to review and reschedule activities.
 
     Managers can support their teams’ ZBS efforts by establishing guidelines for meetings and other requests for team members’ time.  Limits may be set on the number of attendees, ensuring that all those in attendance have an opportunity to contribute.  Without opportunity to contribute, there is likely little justification for the time expenditure.
     Limits may also be set on the duration of any single meeting or the total in any day or week to ensure adequate time for projects, experimentation, and contemplation.  A variation on this approach is to declare “meeting-free days,” reducing the strain of meeting preparation, attendance, and follow-up on team members’ attention.  To make the most of the meetings that remain, review “Meetings:  Now available in Productive Format!” for more tips.
     Managers may need to review requests that team members have declined.  A decision may be overridden, but it must be done judiciously to maintain a culture of autonomy.  It may be more prudent to assign the task to another individual possessing the requisite knowledge and skills.  A manager could also accept the assignment personally until a qualified team member is available to assume responsibility.
 
     An individual’s attempts to utilize Zero-Based Scheduling can be thwarted by unsupportive organizational norms and policies.  For best results, upper management must perceive the value of each team member’s critical assessment of the demands on his/her time.  If only value-added activities are allowed to reside on team members’ schedules, the entire organization becomes more productive.
 
     For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.

References
[Link] “How a Zero-Based Schedule Supercharges Your Productivity & Performance.”  The BestSelf Hub, January 23, 2020.
[Link] “How a ‘Zero-Based’ Calendar Can Supercharge Your Productivity.”  Melanie Deziel; Inc.com.
[Link] “Let’s Audit Your Calendar.  It Will Only Hurt A Little.”  Darrah Brustein; Inc.com.
[Link] “Three Reasons You Should Create a Zero-Based Schedule.”  Andrea Silvershein; Ellevate, 2018.
[Link] “Help Your Team Spend Time on the Right Things.”  Ron Ashkenas and Amy McDougall; Harvard Business Review, October 23, 2014.
[Link] “Your Scarcest Resource.”  Michael Mankins, Chris Brahm, and Greg Caimi; Harvard Business Review, May 2014.
[Link] “Management Tools 2017:  An executive’s guide.”  Darrell K. Rigby; Bain & Company, Inc., 2017.
[Link] “The Surprising Impact of Meeting-Free Days.”  Ben Laker, Vijay Pereira, Pawan Budhwar, and Ashish Malik; MIT Sloan Management Review, January 18, 2022.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>