Unintended consequences come in many forms and have many causes. “Revenge effects” are a special category of unintended consequences, created by the introduction of a technology, a policy, or a combination of the two that produces outcomes contrary to the desired result. Revenge effects may exacerbate the original problem or create a new situation that is equally undesirable, if not more objectionable.
Discussions of revenge effects often focus on technology – the most tangible cause of a predicament. However, “[t]echnology alone usually doesn’t produce a revenge effect.” It is typically the combination of technology, policy (laws, regulations, etc.), and behavior that endows a decision with the power to frustrate its own intent.
This installment of “The Third Degree” explores five types of revenge effects, differentiates between revenge and other effects, and discusses minimizing unanticipated unfavorable outcomes.
The Law of Unintended Consequences can be stated in many ways. The formulation forming the basis of this discussion is as follows:
“The Law of Unintended Consequences states that every decision or action produces outcomes that were not motivations for, or objectives of, the decision or action.”
Like many definitions, this statement of “the law” may seem obscure to some and obvious to others. This condition is often evidence of significant nuance. In the present case, much of the nuance has developed as a result of the evolving use of terms and the contexts in which these terms are most commonly used.
The transformation of terminology, examples of unintended consequences, how to minimize negative effects, and more are explored in this installment of “The Third Degree.”
An organization’s safety-related activities are critical to its performance and reputation. The profile of these activities rises with public awareness or concern. Nuclear power generation, air travel, and freight transportation (e.g. railroads) are commonly cited examples of high-profile industries whose safety practices are routinely subject to public scrutiny.
When addressing “the public,” representatives of any organization are likely to speak in very different terms than those presented to them by technical “experts.” After all, references to failure modes, uncertainties, mitigation strategies, and other safety-related terms are likely to confuse a lay audience and may have the opposite of the desired effect. Instead of assuaging concerns with obvious expertise, speaking above the heads of concerned citizens may prompt additional demands for information, prolonging the organization’s time in an unwanted spotlight.
In the example cited above, intentional obfuscation may be used to change the beliefs of an external audience about the safety of an organization’s operations. This scenario is familiar to most; myriad examples are provided by daily “news” broadcasts. In contrast, new information may be shared internally, with the goal of increasing knowledge of safety, yet fail to alter beliefs about the organization’s safety-related performance. This phenomenon, much less familiar to those outside “the safety profession,” has been dubbed “probative blindness.” This installment of “The Third Degree” introduces probative blindness and discusses how to recognize it and how to combat it.
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC