An organization’s safety-related activities are critical to its performance and reputation. The profile of these activities rises with public awareness or concern. Nuclear power generation, air travel, and freight transportation (e.g. railroads) are commonly cited examples of high-profile industries whose safety practices are routinely subject to public scrutiny.
When addressing “the public,” representatives of any organization are likely to speak in very different terms than those presented to them by technical “experts.” After all, references to failure modes, uncertainties, mitigation strategies, and other safety-related terms are likely to confuse a lay audience and may have the opposite of the desired effect. Instead of assuaging concerns with obvious expertise, speaking above the heads of concerned citizens may prompt additional demands for information, prolonging the organization’s time in an unwanted spotlight.
In the example cited above, intentional obfuscation may be used to change the beliefs of an external audience about the safety of an organization’s operations. This scenario is familiar to most; myriad examples are provided by daily “news” broadcasts. In contrast, new information may be shared internally, with the goal of increasing knowledge of safety, yet fail to alter beliefs about the organization’s safety-related performance. This phenomenon, much less familiar to those outside “the safety profession,” has been dubbed “probative blindness.” This installment of “The Third Degree” serves as an introduction to probative blindness, how to recognize it, and how to combat it.
There are three types of safety activity mentioned throughout this discussion of probative blindness (PB), often without being explicitly identified. They are assessment, ensurance, and assurance. Assessment activities are susceptible to PB; these activities seek to update an organization’s understanding of its safety performance and causes of accidents, but may fail to do so.
Ensurance activities are conducted to increase the safety of operations as a result of successful assessments (beliefs about safety have changed). Assurance activities seek to increase confidence in the organization’s safety performance. This may involve publicizing statistics or describing safety-related features of a system or product.
The term “probative blindness” was coined by Drew Rae, an Australian academic, safety researcher, and advocate, and his colleagues. It is used to describe activities that increase stakeholders’ subjective confidence in safety practices beyond that which is warranted by the insight provided or knowledge created about an organization’s actual safety-related performance. Stated another way, activities that reduce perceived risk, while leaving actual risk unchanged, exhibit probative blindness.
It is worthwhile to reiterate, explicitly, that PB is a characteristic of the activity, not the participants. While use of the term in this way may not be highly intuitive, it will be maintained to avoid further confusion. The relevance of the concept to the pursuit of safety, in any context, justifies tolerating this minor inconvenience.
According to Rae et al., “[p]robative blindness requires:
1. a specific activity, conducted at a particular time;
2. an intent or belief that the activity provides new information about safety; and
3. no change in organisational belief about safety as a result of the activity.”
The specific activity could be a Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (see the “FMEA” series), Hazard and Operability Studies (HAZOPS), or other technique intended to inform the organization about hazards, risks, and mitigation strategies. The intent to provide new information differentiates the activity from pep talks, platitudes, and public affirmations. Conducting an activity with the objective of acquiring new information does not ensure its achievement, however.
A failure to provide new information is one mechanism by which the third element of probative blindness manifests. It also occurs when the results or conclusions of the safety activity are rejected, or dismissed as faulty, for any reason. This could stem from cognitive biases or distrust of the activity’s participants, particularly its leaders or spokesmen.
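The three criteria, and the two mechanisms by which the third manifests, can be expressed as a simple predicate. The following is a minimal illustrative sketch; the `SafetyActivity` record and its field names are my own shorthand, not terminology from Rae et al.

```python
from dataclasses import dataclass

# Illustrative sketch only: the class and field names are hypothetical,
# not taken from Rae et al. They encode the three criteria quoted above.
@dataclass
class SafetyActivity:
    name: str                       # criterion 1: a specific activity at a particular time
    intended_to_inform: bool        # criterion 2: intent/belief that it provides new information
    produced_new_information: bool  # did the activity actually yield new findings?
    beliefs_updated: bool           # criterion 3 (negated): did organizational beliefs change?

def exhibits_probative_blindness(activity: SafetyActivity) -> bool:
    # PB requires the intent to inform combined with no resulting belief change,
    # whether because nothing new was found or because findings were dismissed.
    return activity.intended_to_inform and not activity.beliefs_updated

# An FTA that produced findings which were then rejected still exhibits PB.
fta = SafetyActivity("FTA", intended_to_inform=True,
                     produced_new_information=True, beliefs_updated=False)
print(exhibits_probative_blindness(fta))  # True
```

Note that `produced_new_information` does not appear in the predicate: an activity that yields valid findings which are dismissed is just as blind as one that finds nothing.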
Precursors of Blindness
There are several conditions that may exist in an organization that can make it more susceptible to probative blindness. A “strong prior belief” in the safety of operations may lead managers to discount evidence of developing issues. Presumably, the belief is justified by past performance; however, the accuracy of prior assessments is irrelevant. Only current conditions should be considered.
Preventing assessments of past performance from clouding judgment of current conditions becomes more difficult when success is defined by a lack of discovery of safety issues. Deteriorating conditions are often ignored or discounted because an operation is “historically safe.” Difficulty in spurring an appropriate response to a newfound safety issue, whether due to nontechnical (e.g. “political”) resistance, resource shortages, or other capacity limitation, may bolster the tendency to rely on past performance for predictions of future risk.
A “strong desire for certainty and closure,” such as that to which the public assurance scenario, cited previously, alludes, may lead an organization to focus on activity rather than results. For the uninitiated, activity and progress can be difficult to distinguish. The goal is to assuage concerns about an operation’s safety, irrespective of the actual state of affairs.
When organizational barriers exist between the design, operations, and safety functions, thorough analysis becomes virtually impossible. Limited communication amongst these groups leads to a dearth of information, inviting a shift of focus from safety improvement to mere compliance. Analysis is replaced by “box-checking exercises” that have no potential to increase knowledge or change beliefs about operational safety. The illusion of safety is created with no impact on safety itself.
Exhibit 1 provides a mapping of manifestations of PB. Some of the mechanisms cited, such as failure to identify a hazard due to human fallibility, are somewhat obvious causes. Others require further consideration to appreciate their implications.
“Double dipping” is the practice of repeating a test until an acceptable result is obtained. The term may be misleading, as it often takes many more than two attempts to satisfy expectations. The more iterations required, and the more creative the justifications for modifying parameters, the more egregious and unscientific this violation of proper protocol becomes.
Changing the analysis and post-hoc rationalization can also be used to end the iterative cycle of testing by modifying the interpretation of results, or the objective itself, to reach a “satisfactory” conclusion. Real hazard and risk information is thus missing from analysis reports and withheld from decision-makers.
Motivated skepticism involves cognitive biases that influence the interpretation of, or confidence in, analysis results. Confirmation bias leads decision-makers to reject undesirable results, holding the activities that produced them to a different standard than those yielding more favorable results. Reinterpretation may be attributed to normalcy bias, in which aberrant system behavior is trivialized. Seeking a second opinion is similar to double dipping; several opinions may be solicited before one is deemed acceptable.
A valid analysis can be nullified by the inability to communicate uncertainty. All analyses are subject to uncertainty; an appropriate response to analysis results or recommendations requires an understanding of that uncertainty. If the analysis team cannot express it in appropriate terms and context, the uncertainty could be transferred, in the minds of decision-makers, to the validity of the analysis.
Accidents occur for many reasons; probative blindness is one model used to describe an organization’s understanding of the causes of an accident. A brief discussion of others provides additional clarity for PB by contrasting it with the other models. A summary of the models discussed below is shown in Exhibit 2.
To restate, probative blindness occurs when safety analysis prompts no response indicating a change in beliefs about safety-relevant conditions has taken place. Relevant information includes the existence of a hazard, risks associated with an acknowledged hazard, and the effectiveness of mitigation strategies related to an acknowledged hazard.
The “Irrelevant” model of safety activity pertains to activities cited to demonstrate an organization’s concern for safety, though they are unrelated to a specific analysis or accident under investigation. Citing these activities is often a “damage control” effort; spokesmen attempt to preserve an organization’s reputation in the face of adverse events. In the “Aware but Powerless” model, activities are “neither blind nor safe.” The organization is aware of safety issues, but responses to them, if undertaken, are ineffective.
Each of the models discussed thus far includes activities that, arguably, demonstrate concern for safety. They differ in the influence that concern has on safety activity, but none improves accident prevention.
Activities in the Aware but Powerless model, as made clear by its name, also demonstrate awareness of hazards within the organization. Only one other does so – the “Lack of Concern” model. In this model, both insufficient analysis and insufficient response to known hazards are present. The underlying rationale is a subject for secondary analysis; overconfidence and callousness are possible motives for neglecting safety activity.
In the final two models, activities fail to demonstrate either awareness or concern, suggesting the absence of both. The direction of the causal relationship between the two deficiencies, if there is one, will vary in differing scenarios. In the “Nonprobative” model, activities are not intended to discover causes of accidents or address specific safety concerns in any meaningful way. Therefore, no awareness is generated; the absence of concern could be a cause or an outcome of pursuing nonprobative activities.
The final model, and the simplest, is “Insufficient Safety Analysis,” wherein activities that could have revealed potential causes of accidents were not conducted. The reasons for omission are, once again, the subject of secondary analysis that may reveal staffing shortages, lack of expertise in the organization, or other contributory factors. Inactivity, like nonprobative activity, could be a byproduct of a lack of awareness or of concern. The interplay of awareness, concern, and activity is presented pictorially in Exhibit 3.
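The interplay of awareness, concern, and activity described above can be sketched as a small decision function. This is only an illustrative mapping under flag names of my own choosing; the models themselves are defined by the exhibits and the discussion above, not by this code.

```python
# Hedged sketch: maps the model descriptions above onto boolean flags.
# The parameter names are my own shorthand, not labels from Exhibits 2 or 3.
def classify_safety_model(analysis_conducted: bool,
                          probative_intent: bool,
                          relevant_to_accident: bool,
                          hazard_awareness: bool,
                          response_attempted: bool,
                          effective_response: bool) -> str:
    if not analysis_conducted:
        return "Insufficient Safety Analysis"  # activities simply not conducted
    if not probative_intent:
        return "Nonprobative"                  # no intent to discover accident causes
    if not relevant_to_accident:
        return "Irrelevant"                    # concern shown, but activity unrelated
    if not hazard_awareness:
        return "Probative Blindness"           # analysis ran, but beliefs never changed
    if not response_attempted:
        return "Lack of Concern"               # hazards known, response neglected
    if not effective_response:
        return "Aware but Powerless"           # response undertaken, but ineffective
    return "Effective safety activity"         # not a failure model

print(classify_safety_model(True, True, True, False, False, False))  # Probative Blindness
```

Reading the conditions top to bottom mirrors the narrative: each model is distinguished from the next by the first missing ingredient, from the absence of activity itself down to the absence of an effective response.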
Unfortunately, there is no silver bullet that will prevent probative blindness in all of an organization’s activities. However, following a few simple rules will significantly improve visibility of safety-relevant conditions:
For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
[Link]  “Probative Blindness: How Safety Activity can fail to Update Beliefs about Safety.” A.J. Rae, J.A. McDermid, R.D. Alexander, and M. Nicholson. 9th IET International Conference on System Safety and Cyber Security, 2014.
[Link]  “Probative blindness and false assurance about safety.” Andrew John Rae and Rob D. Alexander. Safety Science, February 2017.
Jody W. Phelps, MSc, PMP®, MBA
JayWink Solutions, LLC
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC