In common language, “materiality” could be replaced with “importance” or “relevance.” In a business setting, however, the word has greater significance; no adequate substitute is available. In this context, materiality is not a binary characteristic, or even a one-dimensional spectrum; instead, it occupies a two-dimensional space.
Materiality has been defined in a multitude of ways by numerous organizations. Though these organizations have developed their definitions independently, to serve their own purposes, there is a great deal of overlap among them. Perhaps the simplest and, therefore, most broadly applicable description of materiality was provided by the GHG Protocol:
“Information is considered to be material if, by its inclusion or exclusion, it can be seen to influence any decisions or actions taken by users of it.”
Recognizing the proliferation and potential risk of divergent definitions, several organizations that develop corporate reporting standards and assessments published a consensus definition in 2016:
“Material information is any information which is reasonably capable of making a difference to the conclusions reasonable stakeholders may draw when reviewing the related information.” (IIRC, GRI, SASB, CDP, CDSB, FASB, IASB/IFRS, ISO)
The consensus definition is still somewhat cryptic, only alluding to the reason for its existence – corporate financial and ESG (Environmental, Social, Governance) reporting. As much can be surmised from the list of signatory organizations as from the definition itself.
The work balance chart is a critical component of a line balancing effort. It is both the graphical representation of the allocation of task time among operators, equipment, and transfers in a manufacturing or service process and a tool used to achieve an equal distribution.
Like other tools discussed in “The Third Degree,” a work balance chart may be referenced by other names in the myriad resources available. It is often called an operator balance chart, a valid moniker if only manual tasks are considered. It is also known as a Yamazumi Board. “Yamazumi” is Japanese for “stack up;” this term immediately makes sense when an example chart is seen, but requires explanation for every non-Japanese speaker one encounters. Throughout the following presentation, “work balance chart,” or “WBC,” is used to refer to this tool and visual aid. This term is the most intuitive and characterizes the tool’s versatility in analyzing various forms of “work.”
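The data behind a work balance chart can be summarized with simple arithmetic: sum the task times “stacked” on each operator and compare each total to the takt time. The sketch below illustrates this; the demand figures, task names, and task times are hypothetical.

```python
# Illustrative sketch of the data behind a work balance chart.
# All task names, times, and demand figures are hypothetical.

available_time = 27_000       # seconds of work time per shift (7.5 h)
demand = 450                  # units required per shift
takt_time = available_time / demand  # 60 s per unit

# Task times (seconds) currently assigned to each operator
assignments = {
    "Operator 1": {"load part": 12, "weld": 38, "inspect": 6},
    "Operator 2": {"drill": 25, "deburr": 10},
    "Operator 3": {"assemble": 45, "pack": 20},
}

for operator, tasks in assignments.items():
    total = sum(tasks.values())
    status = "OVER takt" if total > takt_time else "within takt"
    print(f"{operator}: {total:5.1f} s ({status})")
```

Plotting each operator’s tasks as a stacked bar against a horizontal takt-time line yields the familiar Yamazumi presentation; the imbalance between Operator 2 and Operator 3 above is the kind of disparity the chart makes immediately visible.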
A precedence diagram is a building block for more advanced techniques in operations and project management. Precedence diagrams are used as inputs to PERT and Gantt charts, line balancing, and the Critical Path Method (topics of future installments of “The Third Degree”).
Many resources discuss precedence diagramming as a component of the techniques mentioned above. However, the fact that it can be used for each of these purposes, and others, warrants a separate treatment of the topic. Separate treatment is also intended to encourage reuse, increasing the value of each diagram created.
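A precedence diagram can be captured as a simple data structure — each task paired with its immediate predecessors — which any of the downstream techniques can then reuse. The sketch below, with hypothetical task IDs, derives one feasible task sequence from such a structure using Kahn’s algorithm:

```python
from collections import deque

# Hypothetical task list with immediate predecessors, as captured on a
# precedence diagram. Letters are illustrative task IDs.
predecessors = {
    "A": [], "B": ["A"], "C": ["A"],
    "D": ["B", "C"], "E": ["D"],
}

def feasible_order(preds):
    """Return one task sequence respecting all precedence relations (Kahn's algorithm)."""
    indegree = {t: len(p) for t, p in preds.items()}
    successors = {t: [] for t in preds}
    for task, ps in preds.items():
        for p in ps:
            successors[p].append(task)
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in successors[t]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    if len(order) != len(preds):
        raise ValueError("cycle detected - not a valid precedence diagram")
    return order

print(feasible_order(predecessors))
```

Storing the diagram this way supports the reuse advocated above: the same predecessor table can feed a line-balancing exercise, a Gantt chart, or a critical-path calculation without redrawing anything.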
A cause & effect diagram is best conceptualized as a specialized application and extension of an affinity diagram. Both types of diagram can be used for proactive (e.g. development planning) or reactive (e.g. problem-solving) purposes. Both use brainstorming techniques to collect information that is sorted into related groups. Where the two diverge is in the nature of relationships represented.
An affinity diagram may present several types of relationships among pieces of information collected. A cause & effect diagram, in contrast, is dedicated to a single relationship and its “direction,” namely, what is cause and what is effect.
In many organizations, complaints can be heard that there are too many programs and initiatives targeting too many objectives. These complaints may come from staff or management; many of them may even be valid. The response to this situation, however, is often misguided and potentially dangerous.
To streamline efforts and improve performance – ostensibly, at least – managers and executives may discontinue or merge programs. Done carelessly, consolidation can be de facto termination. A particularly egregious example of this practice is to combine safety and 5S.
An affinity diagram may stretch the definition of “map,” but perhaps not as much as it first appears. Affinity diagrams map regions of thought, or attention, within a world of unorganized data and information.
Mapping regions of thought in an affinity diagram can aid various types of projects, including product or service development, process development or troubleshooting, logistics, marketing, and safety, health, and environmental sustainability initiatives. In short, nearly any problem or opportunity an organization faces can benefit from the use of this simple tool.
Numerous resources providing descriptions of affinity diagrams are available. It is the aim of “The Third Degree” to provide a more helpful resource than these often bland or confusing articles by adding nuance, insight, and tips for effective use of the tools discussed in these pages. In this tradition, the discussion of affinity diagrams that follows presents alternative approaches with the aim of maximizing the tool’s utility to individuals and organizations.
Most manufactured goods are produced and distributed to the marketplace where consumers are then sought. Services, in contrast, are not “produced” until there is a “consumer.” Simultaneous production and consumption is a hallmark of service; no inventory can be accumulated to compensate for fluctuating demand.
Instead, demand must be managed via predictable performance and efficiency. A service blueprint documents how a service is delivered, delineating customer actions and corresponding provider activity. Its pictorial format facilitates searches for improvements in current service delivery and identification of potential complementary offerings. A service blueprint can also be created proactively to optimize a delivery system before a service is made available to customers.
Claims about the impact of sustainability initiatives – or the lack thereof – on a company’s financial performance are prevalent in media. Claims cover the spectrum from crippling, through negligible, to transformative. Articles making these claims target audiences ranging from corporate executives to non-industry activists, politicians, and regulators. Likewise, the articles cite vastly differing sources to support claims.
These articles are often rife with unsupported claims and inconsistencies, are poorly sourced, poorly written, and dripping with bias. The most egregious are often rewarded with “likes,” “shares,” and additional “reporting” by equally biased purveyors of “the word.” These viewpoint warriors create a fog of war that makes navigating the mine-laden battlefield of stakeholder interests incredibly treacherous.
The fog of war is penetrated by stepping outside the chaos to collect and analyze relevant information. To do this in the sustainability vs. profitability context, a group from the NYU Stern Center for Sustainable Business has developed the Return on Sustainability Investment (ROSI) framework. ROSI ends reliance on the incessant cascades of conflicting claims, providing a structure for investigating the impacts of sustainability initiatives on an organization’s financial performance.
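At its core, an analysis of this kind monetizes each identified benefit of an initiative and evaluates the resulting cash flows. The sketch below is a hedged illustration of that arithmetic only — the benefit categories, dollar figures, and discount rate are hypothetical, not part of the published ROSI methodology:

```python
# Hedged sketch of the arithmetic behind monetizing sustainability benefits
# and discounting the cash flows. All figures and categories are hypothetical.

initial_investment = 250_000  # upfront cost of the initiative ($)
discount_rate = 0.08
years = 5
annual_benefits = {           # monetized benefits per year ($)
    "energy savings": 60_000,
    "waste disposal reduction": 25_000,
    "employee retention": 15_000,
}

annual_total = sum(annual_benefits.values())
npv = -initial_investment + sum(
    annual_total / (1 + discount_rate) ** t for t in range(1, years + 1)
)
print(f"NPV of initiative: ${npv:,.0f}")
```

Expressing intangible-sounding benefits (e.g. retention) in dollars, then subjecting them to the same discounting as any other investment, is what lets a sustainability initiative be compared on equal footing with competing uses of capital.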
Unintended consequences come in many forms and have many causes. “Revenge effects” are a special category of unintended consequences, created by the introduction of a technology, policy, or both that produces outcomes in contradiction to the desired result. Revenge effects may exacerbate the original problem or create a new situation that is equally undesirable if not more objectionable.
Discussions of revenge effects often focus on technology – the most tangible cause of a predicament. However, “[t]echnology alone usually doesn’t produce a revenge effect.” It is typically the combination of technology, policy (laws, regulations, etc.), and behavior that endows a decision with the power to frustrate its own intent.
This installment of “The Third Degree” explores five types of revenge effects, differentiates between revenge and other effects, and discusses minimizing unanticipated unfavorable outcomes.
The Law of Unintended Consequences can be stated in many ways. The formulation forming the basis of this discussion is as follows:
“The Law of Unintended Consequences states that every decision or action produces outcomes that were not motivations for, or objectives of, the decision or action.”
Like many definitions, this statement of “the law” may seem obscure to some and obvious to others. This condition is often evidence of significant nuance. In the present case, much of the nuance has developed as a result of the morphing use of terms and the contexts in which these terms are most commonly used.
The transformation of terminology, examples of unintended consequences, how to minimize negative effects, and more are explored in this installment of “The Third Degree.”
An organization’s safety-related activities are critical to its performance and reputation. The profile of these activities rises with public awareness or concern. Nuclear power generation, air travel, and freight transportation (e.g. railroads) are commonly cited examples of high-profile industries whose safety practices are routinely subject to public scrutiny.
When addressing “the public,” representatives of any organization are likely to speak in very different terms than those presented to them by technical “experts.” After all, references to failure modes, uncertainties, mitigation strategies, and other safety-related terms are likely to confuse a lay audience and may have an effect opposite that desired. Instead of assuaging concerns with obvious expertise, speaking above the heads of concerned citizens may prompt additional demands for information, prolonging the organization’s time in an unwanted spotlight.
In the example cited above, intentional obfuscation may be used to change the beliefs of an external audience about the safety of an organization’s operations. This scenario is familiar to most; myriad examples are provided by daily “news” broadcasts. In contrast, new information may be shared internally, with the goal of increasing knowledge of safety, yet fail to alter beliefs about the organization’s safety-related performance. This phenomenon, much less familiar to those outside “the safety profession,” has been dubbed “probative blindness.” This installment of “The Third Degree” serves as an introduction to probative blindness, how to recognize it, and how to combat it.
Another way to Be A Zero – in a good, productive way – is to operate on a zero-based schedule. An organization’s time is the aggregate of individuals’ time and is often spent carelessly. When a member of an organization spends time on any endeavor, the organization’s time is being spent. When groups are formed, the expenditure of time multiplies. Time is the one resource that cannot be increased by persuasive salespeople, creative marketing, strategic partnerships, or other strategy; it must be managed.
“Everyone” in business knows that “time is money;” it only makes sense that time should be budgeted as carefully as financial resources. Like ZBB (Zero-Based Budgeting – Part 1), Zero-Based Scheduling (ZBS) can be approached in two ways; one ends at zero, the other begins there.
Interest in Zero-Based Budgeting (ZBB) is somewhat cyclical, rising in times of financial distress, nearly disappearing in boom-times. This can be attributed, in large part, to detractors instilling fear in managers by depicting it as a “slash and burn” cost-cutting, or downsizing, technique. This is a gross misrepresentation of the ZBB process.
ZBB is applicable to the public sector (various levels of government), private sector (not-for-profit and commercial businesses), and the very private sector (personal finances). Each sector is unique in its execution of ZBB, but the principle of aligning expenditure with purpose is consistent throughout.
This installment of “The Third Degree” describes the ZBB process in each sector, compares it to “traditional” budgeting, and explores its advantages and disadvantages. Alternative implementation strategies that facilitate matching the ZBB approach to an organization’s circumstances are also presented.
To be effective in any pursuit, one must understand its objectives and influences. One influence, typically, has a greater impact on process performance than all others – the dominant characteristic of the process. The five main categories of process dominance are worker, setup, time, component, and information.
Processes require tools tailored to manage the dominant characteristic; this set of tools comprises a process control system. The levels of Operations Management at which the tools are employed, and at which the skills and responsibility for process performance reside, differ among the types of dominance.
This installment of “The Third Degree” explores categories of process dominance, tools available to manage them, and examples of processes with each dominant characteristic. Responsibility for control of processes exhibiting each category of dominance will also be discussed in terms of the “Eight Analogical Levels of Operations Management.”
Effective Operations Management requires multiple levels of analysis and monitoring. Each level is usually well-defined within an organization, though they may vary among organizations and industries. The size of an organization has a strong influence on the number of levels and the makeup and responsibilities of each.
In this installment of “The Third Degree,” one possible configuration of Operations Management levels is presented. To justify, or fully utilize, eight distinct levels of Operations Management, it is likely that an organization so configured is quite large. Therefore, the concepts presented should be applied to customize a configuration appropriate for a specific organization.
Standards and guidelines published by industry groups or standards organizations typically undergo an extensive review process prior to acceptance. A number of drafts may be required to refine the content and format into a structure approved by a committee of decision-makers.
As one might expect, the draft review and approval process is not consistent for every publication. The number of drafts, time to review, and types of changes requested will vary. Though each review is intended to be rigorous, errors often remain in the approved publication. The content may also require interpretation to employ effectively.
This is certainly true of the aligned AIAG/VDA FMEA Handbook. In this installment of the “FMEA” series, the Handbook’s errors and omissions, opacities and ambiguities will be discussed. Where possible, mistakes will be corrected, blanks filled in, and clarity provided in pursuit of greater utility of the Handbook for all FMEA practitioners.
To conduct a Process FMEA according to the AIAG/VDA alignment, the seven-step approach presented in Vol. VI (Aligned DFMEA) is used. The seven steps are repeated with a new focus of inquiry. As with the DFMEA, several system-, subsystem-, and component-level analyses may be required to fully understand a process.
Paralleling previous entries in the “FMEA” series, this installment presents the 7-step aligned approach applied to process analysis and the “Standard PFMEA Form Sheet.” Review of classical FMEA and aligned DFMEA is recommended prior to pursuing aligned PFMEA; familiarity with the seven steps, terminology used, and documentation formats will make aligned PFMEA more comprehensible.
Suppliers producing parts for automotive manufacturers around the world have always been subject to varying documentation requirements. Each OEM (Original Equipment Manufacturer) customer defines its own requirements; these requirements are strongly influenced by the geographic region in which each OEM resides.
In an effort to alleviate confusion and the documentation burden of a global industry, AIAG (Automotive Industry Action Group) of North America and VDA (Verband der Automobilindustrie) of Germany jointly published the aligned “FMEA Handbook” in 2019. Those experienced with “classical” FMEA (Vol. III, Vol. IV) will recognize its influence in the new “standard;” however, there are significant differences that require careful consideration to ensure a successful transition.
Failure Modes and Effects Analysis (FMEA) is most commonly used in product design and manufacturing contexts. However, it can also be helpful in other applications, such as administrative functions and service delivery. Each application context may require refinement of definitions and rating scales to provide maximum clarity, but the fundamentals remain the same.
Several standards have been published defining the structure and content of Failure Modes and Effects Analyses (FMEAs). Within these standards, there are often alternate formats presented for portions of the FMEA form; these may also change with subsequent revisions of each standard.
Add to this variety the diversity of industry and customer-specific requirements. Those unbeholden to an industry-specific standard are free to adapt features of several to create a unique form for their own purposes. The freedom to customize results in a virtually limitless number of potential variants.
Few potential FMEA variants are likely to have broad appeal, even among those unrestricted by customer requirements. This series aims to highlight the most practical formats available, encouraging a level of consistency among practitioners that maintains Failure Modes and Effects Analysis as a portable skill. Total conformity is not the goal; presenting perceived best practices is.
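Whatever form variant is chosen, the risk-ranking arithmetic common to “classical” FMEA is consistent: a Risk Priority Number (RPN) is the product of Severity, Occurrence, and Detection ratings, each on a 1–10 scale. The failure modes and ratings below are hypothetical:

```python
# Risk-ranking arithmetic common to "classical" FMEA forms:
# RPN = Severity x Occurrence x Detection, each rated 1-10.
# Failure modes and ratings below are hypothetical.

failure_modes = [
    {"mode": "seal leak",       "S": 8, "O": 4, "D": 3},
    {"mode": "thread stripped", "S": 6, "O": 2, "D": 2},
    {"mode": "mislabeled part", "S": 7, "O": 3, "D": 6},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Rank by RPN to prioritize corrective action
for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f'{fm["mode"]:16s} RPN = {fm["RPN"]}')
```

Note that the aligned AIAG/VDA approach discussed in this series replaces RPN-based ranking with Action Priority; the multiplication above applies to the classical forms.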
Thus far, the “Making Decisions” series has presented tools and processes used primarily for prioritization or single selection decisions. Decision trees, in contrast, can be used to aid strategy decisions by mapping a series of possible events and outcomes.
Its graphical format allows a decision tree to present a substantial amount of information, while the logical progression of strategy decisions remains clear and easy to follow. The use of probabilities and monetary values of outcomes provides for a straightforward comparison of strategies.
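The comparison of strategies reduces to “rolling back” the tree: compute the expected monetary value (EMV) of each chance node from its probabilities and payoffs, then choose the branch with the best EMV. A minimal sketch, with hypothetical strategies, probabilities, and payoffs:

```python
# Sketch of rolling back a small decision tree by expected monetary value
# (EMV). Strategies, probabilities, and payoffs are hypothetical.

strategies = {
    "launch nationally": [   # (probability, payoff) pairs for chance outcomes
        (0.4, 500_000), (0.6, -150_000),
    ],
    "regional pilot": [
        (0.5, 120_000), (0.5, 10_000),
    ],
}

def emv(outcomes):
    """Expected monetary value of a chance node."""
    return sum(p * payoff for p, payoff in outcomes)

best = max(strategies, key=lambda s: emv(strategies[s]))
for s, outcomes in strategies.items():
    print(f"{s}: EMV = ${emv(outcomes):,.0f}")
print("Preferred strategy:", best)
```

In a larger tree, the same calculation is applied recursively from the outermost branches back to the root, which is what keeps the logical progression of strategy decisions easy to follow.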
A Pugh Matrix is a visual aid created during a decision-making process. It presents, in summary form, a comparison of alternatives with respect to critical evaluation criteria. As is true of other decision-making tools, a Pugh Matrix will not “make the decision for you.” It will, however, facilitate rapidly narrowing the field of alternatives and focusing attention on the most viable candidates.
A useful way to conceptualize the Pugh Matrix Method is as an intermediate-level tool, positioned between the structured but open Rational Model (Vol. II) and the thorough Analytic Hierarchy Process (AHP, Vol. III). The Pugh Matrix is more conclusive than the former and less complex than the latter.
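The scoring pass at the heart of the method is simple: each alternative is rated against a baseline (datum) concept, criterion by criterion, as better (+1), same (0), or worse (−1), and the ratings are tallied. The criteria and ratings below are hypothetical:

```python
# Sketch of a Pugh Matrix scoring pass: each alternative is rated against
# a baseline (datum) concept per criterion as better (+1), same (0), or
# worse (-1), then tallied. Criteria and ratings are hypothetical.

criteria = ["cost", "durability", "ease of use", "weight"]
ratings = {                       # vs. the datum, one rating per criterion
    "Concept A": [+1, 0, -1, +1],
    "Concept B": [0, +1, +1, -1],
    "Concept C": [-1, -1, 0, +1],
}

for concept, scores in ratings.items():
    plus = scores.count(+1)
    minus = scores.count(-1)
    print(f"{concept}: +{plus} / -{minus} / net {sum(scores)}")
```

The tallies do not “make the decision,” as noted above; concepts with strongly negative nets are dropped, and the survivors are refined or hybridized for another pass.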
Committing resources to project execution is a critical responsibility for any organization or individual. Executing poor-performing projects can be disastrous for sponsors and organizations; financial distress, reputational damage, and sinking morale, among other issues, can result. Likewise, rejecting promising projects can limit an organization’s success by any conceivable measure.
The risks inherent in project selection compel sponsors and managers to follow an objective and methodical process to make decisions. Doing so leads to project selection decisions that are consistent, comparable, and effective. Review and evaluation of these decisions and their outcomes also becomes straightforward.
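One common objective method is a weighted scoring model: each candidate project is scored against a fixed set of criteria, and the scores are combined using agreed-upon weights. The criteria, weights, and scores below are hypothetical:

```python
# Hedged sketch of a weighted scoring model for project selection.
# Criteria, weights, and scores are hypothetical; weights sum to 1.0.

weights = {"strategic fit": 0.40, "financial return": 0.35, "risk": 0.25}
projects = {                      # each criterion scored 1-10
    "Project X": {"strategic fit": 8, "financial return": 6, "risk": 7},
    "Project Y": {"strategic fit": 5, "financial return": 9, "risk": 4},
}

def weighted_score(scores):
    """Combine criterion scores using the shared weights."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in projects.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```

Because every project is scored against the same criteria and weights, the resulting decisions are consistent and comparable, and the record of scores makes later review of each decision straightforward.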
An effective safety program requires identification and communication of hazards that exist in a workplace or customer-accessible area of a business and the countermeasures in place to reduce the risk of an incident. The terms hazard, risk, incident, and others are used here as defined in “Safety First! Or is It?”
A hazard map is a highly efficient instrument for conveying critical information regarding Safety, Health, and Environmental (SHE) hazards due to its visual nature and standardization. While some countermeasure information can be presented on a hazard map, it is often more salient when presented on a corollary body map. Use of a body map is often a prudent choice; typically, the countermeasure information most relevant to many individuals pertains to the use of personal protective equipment (PPE). The process used to develop a hazard map and its corollary body map will be presented.
Many organizations adopt the “Safety First!” mantra, but what does it mean? The answer, of course, differs from one organization, person, or situation to another. If an organization’s leaders truly live the mantra, its meaning will be consistent across time, situations, and parties involved. It will also be well-documented, widely and regularly communicated, and supported by action.
In short, the “Safety First!” mantra implies that an organization has developed a safety culture. However, many fall far short of this ideal; often it is because leaders believe that adopting the mantra will spur the development of safety culture. In fact, the reverse is required; only in a culture of safety can the “Safety First!” mantra convey a coherent message or be meaningful to members of the organization.
Choosing effective strategies for waging war against error in manufacturing and service operations requires an understanding of “the enemy.” The types of error to be combatted, the sources of these errors, and the amount of error that will be tolerated are important components of a functional definition (see Vol. I for an introduction).
The traditional view is that the amount of error to be accepted is defined by the specification limits of each characteristic of interest. Exceeding the specified tolerance of any characteristic immediately transforms the process output from “good” to “bad.” This is a very restrictive and misleading point of view. Loss functions provide much greater insight into product performance and customer satisfaction.
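The best-known alternative to the “goalpost” view is the quadratic (Taguchi) loss function, under which loss grows continuously with deviation from target — even inside the specification limits. A minimal sketch for a nominal-is-best characteristic, with hypothetical target and loss coefficient:

```python
# Sketch of the quadratic (Taguchi) loss function, which contrasts with
# the "goalpost" view of specification limits: loss grows continuously
# with deviation from target, even inside spec. Values are hypothetical.

def taguchi_loss(y, target, k):
    """L(y) = k * (y - target)^2 for a nominal-is-best characteristic."""
    return k * (y - target) ** 2

target = 10.0     # nominal dimension (mm)
k = 50.0          # loss coefficient ($ per mm^2), derived from repair cost

for y in (10.0, 10.1, 10.2, 10.3):
    print(f"y = {y:.1f} mm -> loss = ${taguchi_loss(y, target, k):.2f}")
```

Note how a part at y = 10.3 mm incurs nine times the loss of one at 10.1 mm, even if both fall within specification — the insight the goalpost view conceals.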
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC